Blog

check it out

it occurred to me that rather than migrating ILS charges by creating FOLIO loans, aka POSTing to /circulation/loans, it might be less complicated to simply check the relevant items out to the relevant borrowers via /circulation/check-out-by-barcode and follow up with /circulation/loans/{id}/change-due-date to modify due dates as needed
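here's a minimal sketch of that two-call approach - not the real script, and the okapi_post helper, environment variable names, and stand-in variables are all mine:

  #!/usr/bin/perl
  # hedged sketch: assumes OKAPI_URL, OKAPI_TENANT, and OKAPI_TOKEN are set
  use strict;
  use warnings;
  use LWP::UserAgent;
  use JSON;

  my $ua = LWP::UserAgent->new;

  # one charge from the ILS (stand-in variables)
  my ( $userBarcode, $itemBarcode, $servicePointId, $dueDateFromILS ) = @ARGV;

  # POST a JSON payload to Okapi and return the decoded response, if any
  sub okapi_post {
      my ( $path, $payload ) = @_;
      my $res = $ua->post(
          "$ENV{OKAPI_URL}$path",
          'Content-Type'   => 'application/json',
          'x-okapi-tenant' => $ENV{OKAPI_TENANT},
          'x-okapi-token'  => $ENV{OKAPI_TOKEN},
          Content          => encode_json($payload),
      );
      die $res->status_line unless $res->is_success;
      my $body = $res->decoded_content;
      return length($body) ? decode_json($body) : undef;
  }

  # check the item out to the borrower - FOLIO applies the circ rules here...
  my $loan = okapi_post( '/circulation/check-out-by-barcode', {
      itemBarcode    => $itemBarcode,
      userBarcode    => $userBarcode,
      servicePointId => $servicePointId,
  } );

  # ...then replace the calculated due date with the one from the ILS
  okapi_post( "/circulation/loans/$loan->{id}/change-due-date",
      { dueDate => $dueDateFromILS } );    # ISO 8601 timestamp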

one of the benefits of this approach was the opportunity to test the circ rules to make sure that I had gotten them right (smile)

however, of the 4,898 lines in the chargeData db, two did not load because their itemBarcodes could not be found in FOLIO (sad)

it turned out that the relevant titles did not have holdingsRecords either and that both should've had Dewey call numbers that begin with zero - this was the clue that I needed to discover that I had left a line out of the bibLoad.pl script for 0XX call numbers that are in the STACKS - now I'm going to cp bibLoad.pl zeroFix.pl, edit zeroFix.pl so that it considers a list of catalog keys rather than every MARC in the db, pull a list of relevant catalog keys from the ILS, and then double check to make sure that Bob's my uncle...
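for the record, the zeroFix.pl tweak might look something like this - the file names, and the assumption that the catalog key lives in the 001, are mine:

  #!/usr/bin/perl
  # hypothetical sketch of zeroFix.pl's filter, not the script itself
  use strict;
  use warnings;
  use MARC::Batch;

  # load the catalog keys pulled from the ILS into a lookup hash
  my %wanted;
  open my $keys, '<', 'catalogKeys.txt' or die "catalogKeys.txt: $!";
  while ( my $key = <$keys> ) { chomp $key; $wanted{$key} = 1; }
  close $keys;

  my $batch = MARC::Batch->new( 'USMARC', 'allBibs.mrc' );
  while ( my $marc = $batch->next ) {
      my $f001 = $marc->field('001') or next;       # assuming the key is in the 001
      next unless $wanted{ $f001->data };           # skip MARCs not on the list
      # ...bibLoad.pl's per-record processing continues here...
  }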

There are 495,508 of them - the process of crafting the bibLoad.pl script, and testing it as each component was added, took much longer than I expected. But once it started, it took less than a week for it to complete its work.

Now we are testing random titles to make sure that they arrived safely in FOLIO.

the beginning of the end

at least it feels that way

I've finished coding and testing bibLoad.pl and am going to go for a walk and not think about it for a little while.

then I'm going to load the not-bibliographic stuff, delete said stuff from the ILS, harvest all of the MARCs one last time, and run bibLoad.pl for real

titles w/out items

they shouldn't exist but there are 7,550 of them

we decided to put them aside so now there are two files:

  1. titlesWithoutItems.mrc, and
  2. titlesWithoutItems.txt, which contains the classification scheme and call number, since the above MARCs have no items and therefore no 999s

after hunting down and fixing the last batch of broken call numbers (there were about 1,600 of them - not really broken, but wrong) which included either vol., no., pt., or ser. but not the all-important |z, it was time to continue going thru the list of instance fields for which MARC parsing code had not yet been written
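backing up a step, the hunt for those |z-less call numbers was essentially one pattern match - a rough sketch, with callNumbers.txt standing in for the harvested data:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # flag call numbers that mention an analytic (vol., no., pt., ser.)
  # but never get to the all-important |z
  open my $in, '<', 'callNumbers.txt' or die "callNumbers.txt: $!";
  while ( my $callNumber = <$in> ) {
      chomp $callNumber;
      print "$callNumber\n"
          if $callNumber =~ /\b(?:vol|no|pt|ser)\./i
          && $callNumber !~ /\|z/;
  }
  close $in;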

it wasn't long before I came upon instances.languages - I went into the sandbox to edit an instance and found that adding a language required me to select a language, not a code, from a very long select box - so I looked in Settings to find the list of languages and codes (which I expected would jibe with the MARC Code List for Languages) and couldn't find it - when I looked at the JSON document for the instance that I had just edited, what I found was the code, not the language - so how does FOLIO know?
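for context, trimmed down to the relevant field, the instance document carried something like this ("eng" being an example code):

  "languages": [ "eng" ]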

my next thought was that it might be like contributorNameTypes, which is a db table that the API knows about but Settings does not - I couldn't find anything in the list of APIs that looked like it would give me a clue re languages

my next thought was that this might be included in the "reference data" which I asked EBSCO not to load - that question could be answered by running the bibLoad.pl script against the production server

so I did that and the script threw an error because the contributorNameTypes and identifierTypes that I loaded in production are slightly different from those in the sandbox

so I updated my bibLoad.pl script so that it would not throw those errors and realized that now I have to test the rest of those "types" - I've made a checklist to help me keep track
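one way to work thru that checklist is to pull each list from both servers and diff the names - a hedged sketch (the env variable names are mine, and the endpoints are the inventory-storage ones):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use LWP::UserAgent;
  use JSON;

  my $ua = LWP::UserAgent->new;

  # pull the name of every row of one reference-data table from one server
  sub names {
      my ( $okapi, $token, $path, $key ) = @_;
      my $res = $ua->get(
          "$okapi$path?limit=1000",
          'x-okapi-tenant' => $ENV{OKAPI_TENANT},
          'x-okapi-token'  => $token,
      );
      die $res->status_line unless $res->is_success;
      return map { $_->{name} } @{ decode_json( $res->decoded_content )->{$key} };
  }

  for my $pair ( [ '/contributor-name-types', 'contributorNameTypes' ],
                 [ '/identifier-types',       'identifierTypes'      ] ) {
      my %sandbox = map { $_ => 1 }
          names( $ENV{SANDBOX_OKAPI}, $ENV{SANDBOX_TOKEN}, @$pair );
      my %prod = map { $_ => 1 }
          names( $ENV{PROD_OKAPI}, $ENV{PROD_TOKEN}, @$pair );
      print "sandbox only: $_\n" for grep { !$prod{$_} }    sort keys %sandbox;
      print "prod only:    $_\n" for grep { !$sandbox{$_} } sort keys %prod;
  }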

and, by the way, it turns out that the production server knows about the languages and their codes which is a very good thing but I still don't know where they are hiding - it doesn't really matter but I am very curious!

...today, exactly one week after doing that, I discovered and repaired approx 450 call numbers that didn't have enough pipes - I guess my original expectation that there would be very little pre-migration data clean-up was just a pipe dream!

I just found several call numbers with too many pipes. I was looking for call numbers that SHOULD have a pipe just before the z to indicate that the call number's analytic follows. What I found were call numbers which had been copied and pasted out of the MARC's 050 and still included the subfield b (|b), causing the ILS to not find the |z that might follow later in the call number. That's bad enough, but it turns out that the |b messes up the display of the call number in the catalog too. Ouch!

The other issue that this brings up is that any data that I pull out of the ILS is going to be pipe-delimited, which means that if there are too many pipes the data will get out-of-sync.

So, it's time to add "check the data for unwanted pipes before harvesting same" to the to-do list!
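Here's roughly what that to-do item could look like - again, callNumbers.txt is a stand-in, and I'm assuming |z is the only pipe a call number should ever legitimately have:

  #!/usr/bin/perl
  use strict;
  use warnings;

  open my $in, '<', 'callNumbers.txt' or die "callNumbers.txt: $!";
  while ( my $callNumber = <$in> ) {
      chomp $callNumber;
      my $pipes = () = $callNumber =~ /\|/g;    # count every pipe
      print "stray |b:       $callNumber\n" if $callNumber =~ /\|b/;
      print "too many pipes: $callNumber\n" if $pipes > 1;
  }
  close $in;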

Configuration!

Following up on the work that was accomplished last week, the following db tables were loaded with all of the details necessary to begin loading instances, holdingsRecords, and items:

  • electronicAccessRelationships
  • identifierTypes
  • instanceNoteTypes
  • instanceStatuses
  • itemNoteTypes
  • loantypes
  • locations
  • loccamps
  • locinsts
  • loclibs
  • mtypes
  • natureOfContentTerms
  • servicepoints

There are still a few more db tables to load (contributorTypes and contributorNameTypes, just to name the two that I can think of - hopefully there won't be too many more) and then a bunch of work to do to go thru the bibLoad.pl script and make sure that it's prepared to deal with all of the above. That will give me something to do next week.
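For what it's worth, the loading itself boils down to one POST per row - a sketch along these lines, where the endpoint map is just a sampling of the inventory-storage paths and the one-JSON-file-per-row layout is my own invention:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use LWP::UserAgent;

  my $ua = LWP::UserAgent->new;

  # a sampling of table-to-endpoint mappings (inventory-storage paths)
  my %endpoint = (
      loantypes         => '/loan-types',
      mtypes            => '/material-types',
      instanceStatuses  => '/instance-statuses',
      instanceNoteTypes => '/instance-note-types',
      servicepoints     => '/service-points',
  );

  for my $table ( sort keys %endpoint ) {
      for my $file ( glob "$table/*.json" ) {    # one JSON document per row
          open my $fh, '<', $file or die "$file: $!";
          my $json = do { local $/; <$fh> };
          close $fh;
          my $res = $ua->post(
              "$ENV{OKAPI_URL}$endpoint{$table}",
              'Content-Type'   => 'application/json',
              'x-okapi-tenant' => $ENV{OKAPI_TENANT},
              'x-okapi-token'  => $ENV{OKAPI_TOKEN},
              Content          => $json,
          );
          print $res->code, " $table <- $file\n";
      }
  }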

my library card

Yesterday we had a good meeting with the folks at EBSCO re patron functionality within EDS (aka Scholar Search).

When we Go Live! with FOLIO, people will be able to log in to EDS with their Drew credentials and manage their FOLIO patron account: see what they have checked out, renew items, place holds, etc.

EBSCO is going to set up an EDS sandbox so that we can test the system before our Go Live! date (which is still TBD, BTW).

FIT mtg follow-up

We got a lot done last week re sifting thru the different ways that the ILS and FOLIO do things. I'm looking forward to configuring several of FOLIO's db tables to be more granular and less redundant than we seem to have been able to accomplish in the ILS. We discussed the Service points, Loan types, Notes (item), Locations, Material types, and Nature of content tables. We also identified some additional data clean-up re Item categories in the ILS that will benefit the migration. Now there is a bit of a mountain of work to accomplish in order to follow thru re all of the above. Once that is done we should be able to begin loading bib-recs!

I forgot to mention what I did last week!

What would be the best way to identify items in the circulating collection that belong to the GCAH? We use Item Categories in the ILS, and so my first thought was to use Statistical Codes in FOLIO. After some discussion I realized that it would be both easier and better to use FOLIO's tags (there's a small sketch of that after the list below). We will talk about this more later this week. Other Item Categories include...

  • ACACOMP
  • DIPAOLO
  • DORN
  • HOLO
  • MUSIC-DEPT
  • OCLC
  • TEMP
  • THESIS-NA
  • VIDEO-LIB
  • WEATHERBEE
  • WHITE
  • YAN

 ...we need to take a closer look at those items and figure out what to do with them in FOLIO.
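For reference, a tag rides along right on the item record's JSON as a tagList - a fragment like this, with a made-up tag value:

  "tags": { "tagList": [ "gcah" ] }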

In preparation for a FIT meeting later this week I surveyed both Item Types and Locations in our ILS. FOLIO uses Material Types differently than the ILS uses Item Types. In the ILS the Circ Map considers both the Item Type and the Patron Profile when deciding whether or not an item can circ and for how long - locations are not a part of the equation. In FOLIO Material Types and Patron Groups and Locations can be a part of a Circulation Rule but they don't have to be. Understanding how Material Types work and what other attributes FOLIO uses to do what Item Types accomplish in the ILS will help us determine the best way to configure FOLIO.
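To make that concrete, here's a toy circulation rule - not ours, and every policy name is invented - just to show that material type (m), patron group (g), and location (s, plus c/b/a for library, campus, and institution) can each drive a rule, but none of them have to; l, r, and n name the loan, request, and notice policies:

  priority: t, s, c, b, a, m, g
  fallback-policy: l standard-loan r allow-all n default-notices
  m book + g faculty : l semester-loan r allow-all n default-notices
  s reserve-desk : l two-hour-loan r allow-all n default-notices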

P.S. I also loaded RDA Carrier Types into the instanceFormats db table.

It's OK to start small

Call number types and Resource types were loaded today via API.

And so it begins

On Tuesday, May 26, Drew University entered into an agreement with EBSCO Information Services which will provide the University Library with a FOLIO tenant in their production environment (aka the cloud).

The first questions that we had to answer were:

  • Do you want any of the reference data pre-loaded, or a completely blank shell?

In my experience, a good portion of what they call reference data (e.g. pre-fab item types) doesn't end up getting used, and other values need to be created anyway. I would rather load all of the data from scratch than have to sift thru and delete what we won't use and add what we will. So I asked them to make it a completely blank shell.

  • SMTP server for outgoing emails – Do you want to use our SMTP server or Drew University's SMTP server?

We will start out using theirs. We can't change this setting in the FOLIO UI, but if we decide that we want to (we could use either Google or Drew's internal SMTP server) we can ask EBSCO support to do that for us.

On the morning of Friday, May 29, we got word that our tenant had been built, along with usernames and passwords so that we could begin working in our new system. Two minutes later (coincidentally - the meeting had already been booked) the FOLIO Implementation Team (aka FIT) met to discuss some of the philosophies and priorities about how to accomplish the work of configuring the new system, migrating our bibliographic and user data from our old system, training ourselves to use FOLIO, and drafting documentation.

A few notes from FIT's meeting:

  • the timeline
    • support for our current ILS ends on June 30th
    • EBSCO wants to know our Go Live! date so that they can track it and help us get to that date successfully
      • when we Go Live! they will decommission the test server
      • we will have an internal Go Live! that will give us at least a month to test and fix problems before our official Go Live! date which is the one that we will ask EBSCO to help us be successful with
      • we will be able to estimate what those dates might be after the system has been configured and the loading of data has begun
        • I expect user data will take a few hours and that bibliographic data will take about a week  
    • it would be a very good thing for Go Live! to be at least a few weeks before the building opens up again (mid August?)
    • Andrew must certify that we have "destroyed or returned" the software and documentation of our current ILS by September 30
  • the migration
    • it doesn't have to happen all at once - we can move users, titles, call numbers, and barcodes and then make another pass for subsets of the data like
      • sammelbands
      • shadowed stuff
      • MARC holdings
      • current charges
      • bills and fines
  • documentation
    • it was our brainstorming about this that led to the creation of the space in Confluence
    • last I checked there was no documentation for FOLIO - we should check the wiki again but we will probably end up creating our own
  • participating and contributing to the FOLIO community

The May 29 email from EBSCO included URLs, usernames, and passwords...not all of which I understood. I've followed up to ask about them and will make notes about all of that when I get those answers.