How to Download Your OverDrive History in CSV Format

The Background

As a family we read a ton. I started reading to my kids nightly when they were young and to this day we still have story time almost every night. It is a highlight of our family time together. As a result, my kids also love to read. This became an issue when we found ourselves visiting the library multiple times a week to get more books for my son. So, eventually we just started maxing out our library card. It was not unusual for us to have 50 books checked out. That, however, brings its own set of issues. When you have to return the books it can be a challenge to find them all!

Then we found out that our library partners with OverDrive, and we can get books for our kids in e-book format with the click of a few buttons. (You should be hearing Handel’s Messiah playing in your head right now.) Our world changed forever. Gone were the multiple trips to the library and the searching high and low for missing books.

After much research we found out that we could get Amazon Fire tablets for our kids to use as e-readers (strictly e-readers) and load them up with e-books. Many of the books on OverDrive let you check them out in Kindle format. Then you simply use Amazon’s parent dashboard to share the selected Kindle books to your child’s tablet. It’s a bit of a wonky process, but it works pretty well. Now we have a different issue entirely: our kids are flying through books like crazy, and I’m constantly researching new series. Oh well, I guess it’s a good problem to have.

Downloading the history

Well, the other day I wanted to get the history of all the books we have checked out with OverDrive. To my dismay, there was no way to do this. But hey, I’m a programmer! I can do hard things. So, I decided to work something up. At first I thought I was going to have to screen scrape the data from the OverDrive history page and programmatically page through every history page to compile it all. But then I noticed that when I clicked between pages of history the screen did not refresh, which suggested to me that a web service was being called. Voila! After looking through the network calls being made by OverDrive I discovered their REST API and found out it is pretty easy to use and has a ton of information.

So I set to work, and here is the fruit of my labor. It is a browser bookmarklet. Simply drag the link below to your web browser’s bookmarks bar to create the bookmarklet. Then, when you are on your OverDrive history page, click the bookmarklet. It will download all your history data and compile it into a CSV file that you can open in Excel or any other similar program. I hope it works well for you. Please leave a comment if you find it useful.
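If you want to poke at the endpoint yourself before grabbing the bookmarklet, here is a minimal sketch of the call it makes. It assumes the endpoint returns JSON (which is what the bookmarklet relies on) and logs only the fields the code below actually reads; the full response contains quite a bit more. Run it from the browser console while signed in on your library’s OverDrive site.

// Minimal sketch: fetch one page of reading history from the OverDrive REST endpoint
fetch(window.location.origin + '/rest/readingHistory?page=1&perPage=100&sortBy=')
	.then(function(response){ return response.json(); })
	.then(function(data){
		console.log('Total items: ' + data.totalItems);
		console.log('Last page: ' + data.links.last.page);
		console.log('First title: ' + data.items[0].title);
	});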

Bookmarklet:
Download OverDrive History <- drag this to your bookmarks toolbar

The code

Here is the code for the bookmarklet in case you are curious.

var OverDriveHistory = {
	baseURL: window.location.origin,
	restURL: '/rest/readingHistory?page={0}&perPage={1}&sortBy={2}',
	header: 'Title,Sub Title,Author,Series,Publisher,Publish Date,Star Rating,Star Rating Count,Maturity Level,ISBN,Cover Art URL,Borrow Date,Type',
	csv: '',
	currentPage: 1,
	lastPage: -1,
	totalItems: -1,
	pageSize: 100,
	sortBy: '',
	error: false,
	init: function(){
		var t = this;
		if(t.isOverDriveHistoryPage()) {
			if(typeof jQuery == 'undefined') {
				t.getJavaScript('//code.jquery.com/jquery-latest.min.js', function(){
					t.start();
				});
			} else {
				t.start();
			}
		} else {
			alert('Please run this bookmarklet from your OverDrive history page.');
		}
	},
	start: function(){
		var t = this;
		// Read the current sort setting first so the initial request matches the pages fetched later
		t.sortBy = jQuery('.AccountSortOptions-sort').val();
		var url = t.baseURL + t.restURL.replace(/\{0\}/g, '1').replace(/\{1\}/g, t.pageSize).replace(/\{2\}/g, t.sortBy);
		t.showOverlay();
		
		jQuery.ajax({
			url: url
		})
		.done(function(data) {
			t.lastPage = data.links.last.page;
			t.totalItems = data.totalItems;
			t.csv += t.header + '\r\n';
			t.getData();
		})
		.fail(function() {
			t.error = true;
			t.finalize();
		});
	},
	getJavaScript: function(url, success){
		var script = document.createElement('script');
		var head = document.getElementsByTagName('head')[0];
		var done = false;
		
		script.src = url;
		script.onload = script.onreadystatechange = function(){
			if(!done && (!this.readyState || this.readyState == 'loaded' || this.readyState == 'complete')) {
				done = true;
				success();
				script.onload = script.onreadystatechange = null;
				head.removeChild(script);
			}
		};

		head.appendChild(script);
	},
	isOverDriveHistoryPage: function(){
		return window.location.href.toLowerCase().indexOf('overdrive.com/account/history') != -1;
	},
	getData: function(){
		var t = this;
		var url  = '';
		var progress = '';

		if (t.currentPage == t.lastPage + 1) {
			t.finalize();
		} else {
			//Set progress
			if (t.currentPage != t.lastPage) {
				progress = (((t.currentPage * t.pageSize) - t.pageSize) + 1) + '-' + (t.currentPage * t.pageSize) + ' of ' + t.totalItems;
			} else {
				progress = (((t.currentPage * t.pageSize) - t.pageSize) + 1) + '-' + t.totalItems + ' of ' + t.totalItems;
			}
			jQuery('#history-fetch-progress').text(progress);

			url = t.baseURL + t.restURL.replace(/\{0\}/g, t.currentPage).replace(/\{1\}/g, t.pageSize).replace(/\{2\}/g, t.sortBy);
			jQuery.ajax({
				url: url
			})
			.done(function(data) {
				var isbn = '';
				for(var i = 0; i < data.items.length; i++){
					isbn = '';
					for (var x = 0; x < data.items[i].formats.length; x++) {
						if(typeof data.items[i].formats[x].isbn != 'undefined'){
							isbn = data.items[i].formats[x].isbn;
							break;
						}
					}
					
					//Title,Sub Title,Author,Series,Publisher,Publish Date,Star Rating,Star Rating Count,Maturity Level,ISBN,Cover Art URL,Borrow Date,Type
					t.csv += t.escapeCSV(data.items[i].title) + ','
					+ t.escapeCSV(data.items[i].subtitle) + ','
					+ t.escapeCSV(data.items[i].firstCreatorName) + ','
					+ t.escapeCSV(data.items[i].series) + ','
					+ t.escapeCSV(data.items[i].publisher.name) + ','
					+ t.escapeCSV(data.items[i].publishDate) + ','
					+ t.escapeCSV(data.items[i].starRating) + ','
					+ t.escapeCSV(data.items[i].starRatingCount) + ','
					+ t.escapeCSV(data.items[i].ratings.maturityLevel.name) + ','
					+ t.escapeCSV(isbn) + ','
					+ t.escapeCSV(data.items[i].covers.cover510Wide.href) + ','
					+ t.escapeCSV(data.items[i].historyAddDate) + ','
					+ t.escapeCSV(data.items[i].type.name)
					+ '\r\n';
				}

				t.currentPage += 1;
				t.getData();
			})
			.fail(function() {
				t.error = true;
				t.finalize();
			});
		}
	},
	escapeCSV: function(value){
		var t = this;
		var newValue = value;
		
		if(!newValue){
			newValue = "";
		} else {
			newValue = newValue.toString();
		}
		
		if(newValue.indexOf('"') != -1 || newValue.indexOf(',') != -1 || newValue.indexOf('\r') != -1 || newValue.indexOf('\n') != -1){
			newValue = '"' + newValue.replace(/"/g,'""') + '"';
		}
		
		return newValue;
	},
	showOverlay: function(){
		var t = this;
		var html = '';
		var progress = '';
		
		progress = 'initializing';

		html = '<div id="history-fetch-overlay" style="position: fixed;width: 100%;height: 100%;top: 0;left: 0;right: 0;bottom: 0;background-color: rgba(0,0,0,0.5);z-index: 1000;"><div style="position: absolute;top: 50%;left: 50%;transform: translate(-50%,-50%);color: white;text-align:center;"><div style="font-size: 50px;">Fetching data. Please wait.</div><div id="history-fetch-progress" style="font-size: 30px;">' + progress + '</div></div></div>';
		
		jQuery('body').append(html);
	},
	removeOverlay: function(){
		jQuery('#history-fetch-overlay').remove();
	},
	finalize: function(){
		var t = this;
		if(!t.error) {
			var fileName = "OverDriveHistory.csv";

			if (window.navigator.msSaveOrOpenBlob){
				// IE 10+
				var blob = new Blob([decodeURIComponent(encodeURI(t.csv))], {
					type: 'text/plain;charset=utf-8'
				});
				window.navigator.msSaveBlob(blob, fileName);
			} else {
				var pom = document.createElement('a');
				pom.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(t.csv));
				pom.setAttribute('download', fileName);
				document.body.appendChild(pom);
				pom.click();
				document.body.removeChild(pom);
			}
		} else {
			alert('Something went wrong. Please try again.');
		}
		
		t.removeOverlay();
	}
};

OverDriveHistory.init();

Update

2/21/2022 – Added publication type and download progress
3/22/2021 – Initial creation

Convert CSV data into a SQL table

Recently I have been working with some report data that is stored statically in a database in CSV format. I needed that CSV data displayed in a tabular format. I searched around and couldn’t find anything written to do this in SQL, so I decided to write a stored procedure to do exactly that: it takes a CSV string and parses it out into a SQL table. For instance, if I have report data that looks like this…

"FirstName","LastName","DOB","Children"
"John","Smith","1/5/1980","Bob,Sally,Joe,Chris"
"Jane","Smith","2/25/1982","Bob,Sally,Joe,Chris"
"Bruce","Wayne","5/13/1975",""
"Peter","Parker","5/23/1970",""
"","","",""

It would be transformed into this …

FirstName | LastName | DOB       | Children
----------+----------+-----------+--------------------
John      | Smith    | 1/5/1980  | Bob,Sally,Joe,Chris
Jane      | Smith    | 2/25/1982 | Bob,Sally,Joe,Chris
Bruce     | Wayne    | 5/13/1975 |
Peter     | Parker   | 5/23/1970 |
          |          |           |

Note that in the last column the commas are preserved because they are within the quotation mark text qualifiers. Below is the stored procedure that does this. Feel free to use it, and leave a comment if you found it useful.

-- ==========================================================================================
-- Author:  Davin Studer
-- Create date: 4/5/2011
-- Description: This will take a CSV input
-- and transform it into a table
--
-- Params:
-- @string          - The CSV string
-- @textQualifier   - Character that denotes a string within the CSV
-- @columnDelimiter - Character that denotes different columns ... doesn't have to be a comma
-- @rowOneIsHeader  - Does the first row contain column names?
-- ==========================================================================================
create procedure [dbo].[CSVToTable]
    @string nvarchar(max) = '',
    @textQualifier varchar(1) = '',
    @columnDelimiter varchar(1) = ',',
    @rowOneIsHeader bit = 0
as
begin
    -- set nocount on added to prevent extra result sets from
    -- interfering with select statements.
    set nocount on;
 
    -- We need an input string
    if isnull(@string, '') = ''
    begin
        raiserror ('Please supply a CSV string.', 15, 1)
        return
    end
 
    -- We need a column delimiter
    if isnull(@columnDelimiter, '') = ''
    begin
        raiserror ('Please supply a column delimiter.', 15, 1)
        return
    end
  
    -- Make sure the user doesn't pass null as a value
    if isnull(@textQualifier, '') = ''
    begin
        set @textQualifier = ''
    end
 
    -- Make sure the user doesn't pass null as a value
    if isnull(@rowOneIsHeader, '') = ''
    begin
        set @rowOneIsHeader = 0
    end
 
    declare
        @columns int = 1,
        @columnNames nvarchar(max) = '',
        @stop bit = 0,
        @position int = 0,
        @temp nvarchar(1) = '',
        @dataStart int = 0,
        @sql nvarchar(max) = '',
        @qualifierToggle bit = 0,
        @tempString nvarchar(max) = '',
        @delimiterReplacementUTFNumber int = 2603
 
    -- Get rid of the ##tempCSVSplitToTable table if it exists
    if object_id('tempdb..##tempCSVSplitToTable') is not null
    begin
        drop table ##tempCSVSplitToTable
    end
 
    -- Get rid of white space
    set @string = rtrim(ltrim(@string))
 
    -- Set the EOL to char(13)
    set @string = replace(@string, char(13) + char(10), char(13))
    set @string = replace(@string, char(10), char(13))
 
    -- Deal with the delimiter character within the text qualifier characters
    if @textQualifier <> ''
    begin
        -- Start at the first character (substring is 1-based) and walk to the end of the string
        set @position = 1
        while @position <= len(@string)
        begin
            set @temp = substring(@string, @position, 1)
            if @temp = @textQualifier
            begin
                if @qualifierToggle = 0
                begin
                    set @qualifierToggle = 1
                end
                else
                begin
                    set @qualifierToggle = 0
                end
            end
            if @temp = @columnDelimiter and @qualifierToggle = 1
            begin
                set @tempString = @tempString + nchar(@delimiterReplacementUTFNumber) -- replace with UTF delimiter replacement character
            end
            else
            begin
                set @tempString = @tempString + @temp
            end
            set @position = @position + 1
        end
 
        set @string = @tempString
    end
 
    -- Get rid of text qualifier ... we don't need it now
    if @textQualifier <> ''
    begin
        set @string = replace(@string, @textQualifier, '')
    end
 
    -- Get column names
    set @position = 1
    while @stop = 0
    begin
        set @temp = substring(@string, @position, 1)
        if @temp = @columnDelimiter
        begin
            set @columns = @columns + 1
            set @columnNames = @columnNames + ','
        end
        else if @temp = char(13)
        begin
            set @stop = 1
        end
        else
        begin
            set @columnNames = @columnNames + @temp
        end
        set @position = @position + 1
    end
 
    set @dataStart = @position
 
    if @rowOneIsHeader = 0
    begin
        set @dataStart = 1
        set @position = 1
        set @columnNames = ''
        while @position - 1 < @columns
        begin
            set @columnNames = @columnNames + ',Column' + cast(@position as varchar(10))
            set @position = @position + 1
        end
        set @columnNames = substring(@columnNames, 2, len(@columnNames))
    end
 
    -- Build ##tempCSVSplitToTable table
    set @sql = @sql + 'create table ##tempCSVSplitToTable (' + char(13) + '['
    set @stop = 0
    set @position = 1
    while @stop = 0
    begin
        set @temp = substring(@columnNames, @position, 1)
        if @temp <> ','
        begin
            set @sql = @sql + @temp
        end
        else
        begin
            set @sql = @sql + '] nvarchar(max),' + char(13) + '['
        end
   
        set @position = @position + 1
 
        if @position - 1 = len(@columnNames)
        begin
            set @sql = @sql + '] nvarchar(max)'
            set @stop = 1
        end
    end
    set @sql = @sql + ')' + char(13)
    exec (@sql)
 
    -- insert values into ##tempCSVSplitToTable table
    set @position = @dataStart
    while @position - 1 < len(@string)
    begin
        set @stop = 0
        set @sql = 'insert into ##tempCSVSplitToTable ([' + replace(@columnNames, ',', '],[') + ']) values ('''
        while @stop = 0
        begin
            set @temp = substring(@string, @position, 1)
 
            -- end of column
            if @temp = @columnDelimiter
            begin
                set @sql = @sql + ''','''
            end
            -- EOL
            else if @temp = char(13) or datalength(@temp) = 0
            begin
                set @stop = 1
            end
            -- deal with apostrophe in data
            else if @temp = ''''
            begin
                set @sql = @sql + ''''''
            end
            -- column data that isn't an apostrophe
            else
            begin
                set @sql = @sql + @temp
            end
 
            set @position = @position + 1
        end 
        set @sql = @sql + ''')'
 
        -- Get rid of any UTF delimiter replacements that were put in to take the place of the delimiter character within the text qualifier
        if @textQualifier <> ''
        begin
            set @sql = replace(@sql, nchar(@delimiterReplacementUTFNumber), @columnDelimiter)
        end
 
        exec(@sql)
    end
 
    select * from ##tempCSVSplitToTable
 
    -- destroy ##tempCSVSplitToTable table
    if object_id('tempdb..##tempCSVSplitToTable') is not null
    begin
        drop table ##tempCSVSplitToTable
    end
end
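
To round things out, here is a quick example of calling the procedure with the sample report data from above. The double quote is the text qualifier, the comma is the column delimiter, and the first row holds the column names.

-- Example call using the sample report data from above
declare @csv nvarchar(max) = '"FirstName","LastName","DOB","Children"
"John","Smith","1/5/1980","Bob,Sally,Joe,Chris"
"Jane","Smith","2/25/1982","Bob,Sally,Joe,Chris"
"Bruce","Wayne","5/13/1975",""
"Peter","Parker","5/23/1970",""
"","","",""'

exec dbo.CSVToTable
    @string = @csv,
    @textQualifier = '"',
    @columnDelimiter = ',',
    @rowOneIsHeader = 1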