QGIS has georeferencing capabilities that allow an image of a historical map to become a raster layer by assigning geographic coordinates to points on the map. This post experiments with those capabilities, following a tutorial from the Programming Historian. That tutorial was written for an earlier version of QGIS, so completing the lesson with the current version of the program involved some additional experimentation.

The first item in the instructions was to install the GDAL Georeferencing plugin, but after some searching I realized this was no longer a separate plugin but a standard built-in tool. The use of the tool was also a little different than described.

The actual first steps involved setting up a new project, defining the CRS, and adding two vector layers, one being the coastline vector and the other displaying the lots of Prince Edward Island. From this point, I selected the Georeferencer, found under the Raster menu.

This opened a separate window and a dialogue box. Again, the CRS had to be defined and the image of the historical map added.

Next came adding control points, matching locations on the historical map to the same locations on the map just created with the coastline and lot boundaries. With the add point tool selected, you click an easily identifiable spot on the historical map; spots along the coastline worked best. Clicking such a spot opens a dialogue box, and the key here is to select the option in the bottom left corner to enter coordinates from the map canvas. Selecting this takes you to the main map so you can click the same spot there, linking the geographic position of the one map to the other.

This process was then repeated several times, supplying enough control points for the software to position the rest of the historical map based on them.
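Under the hood, georeferencing from control points amounts to fitting a transformation from pixel coordinates to map coordinates. Here is a minimal Python sketch of the simplest case, an affine transform fitted exactly from three made-up control points; QGIS itself offers several transformation types (higher-order polynomial, thin plate spline) that use more points than this:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_affine(gcps):
    """Fit map_x = a*px + b*py + c and map_y = d*px + e*py + f exactly
    from three ((pixel_x, pixel_y), (map_x, map_y)) pairs via Cramer's rule."""
    A = [[px, py, 1.0] for (px, py), _ in gcps]
    d = det3(A)
    def solve(rhs):
        coeffs = []
        for j in range(3):
            M = [row[:] for row in A]
            for i in range(3):
                M[i][j] = rhs[i]
            coeffs.append(det3(M) / d)
        return tuple(coeffs)
    return solve([m[0] for _, m in gcps]), solve([m[1] for _, m in gcps])

def pixel_to_map(coeffs, px, py):
    """Apply the fitted transform to one pixel coordinate."""
    (a, b, c), (d, e, f) = coeffs
    return (a * px + b * py + c, d * px + e * py + f)

# Three hypothetical control points: pixel (0,0) maps to (10,20), and so on.
coeffs = fit_affine([((0, 0), (10.0, 20.0)),
                     ((1, 0), (12.0, 20.0)),
                     ((0, 1), (10.0, 18.0))])
print(pixel_to_map(coeffs, 2, 3))  # (14.0, 14.0)
```

Every other pixel of the historical map then follows the same fitted transform, which is why a handful of well-placed points along the coastline is enough.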

coordinates set and ready to georeference

Then the georeferencing settings had to be configured so the new raster could be named and given a file location; as before, do this by clicking the ellipsis button rather than simply typing a name into the output raster field. Selecting “load in QGIS when done” automatically adds this raster layer to the map in the main window.

Finally, click the triangle play button to run the georeferencing and watch the progress bar progress.

After that, the new raster is added to the map, and as seen, the historical map raster lines up very closely with the coastline and lot vectors already there.

final map, with new raster and vector layers correctly aligned.

This post details another adventure using Scholar’s Workbench from METAscripta, this time using an icon of the Anastasis/Resurrection from the Harvard Art Museum. I was able to copy the IIIF manifest link from the Harvard Art Museum’s website and paste it into Scholar’s Workbench such that it was added to a new collection.

Unlike the manuscript I was working with in my last post about using Scholar’s Workbench, this image came with several annotations already embedded in the manifest. These seem to have been AI generated, and they showcase the current limitations of that technology. Understanding the symbolism used in the icon requires some knowledge of iconographic conventions. Lacking such knowledge, the computer produced some rather humorous annotations, such as this one struggling to explain the emotional state of King Solomon, who was annotated as a 28-44 year old female.

AI's attempt at identifying King Solomon as a female, age 28-44 who may be surprised, afraid, happy, calm, confused, or angry.

Similarly, the inscription tells us this is an icon of the Anastasis (Greek for Resurrection), transliterated into Cyrillic lettering, and the computer understandably had some difficulty making sense of it.

My annotations, I hope, clarify who and what the main figures and actions in the icon represent according to convention. In making them, I used some different annotation options, including, for many figures, the freeform tool, which allows one to trace around a non-geometric shape such as a person. Here are examples of this being used for Kings Solomon and David, John the Baptist, Adam, and Eve. After the shape is drawn, it can be adjusted by moving its points to better match the contours of the figure.

I also used this freeform tool around the mandorla (almond) shape surrounding Christ.

the mandorla representing the uncreated light that illuminates the darkness of Hades.

I also used the line tool to trace around the doors of hades broken under Christ’s feet, naturally in cruciform shape. This tool draws straight lines between points, making it well suited for this shape.

line tool being used to annotate the broken gates of Hades under Christ's feet.

For other features I used the simpler shapes from earlier, like the circle here used to call out the binding of Satan, and the same tool stretched into a long oval to explain what the inscription means.

The IIIF manifest with my annotations is: https://annotate.metascripta.org/manifests/905753465/11/3111ad025a0d9857bd0352b0893c0b25.json
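For anyone curious what such a manifest holds, annotations follow the W3C Web Annotation model: each has a body carrying the text and a target tying it to a region of the canvas. A small Python sketch using the standard library, with a made-up minimal fragment rather than the real (much more deeply nested) manifest:

```python
import json

# Invented minimal fragment in the shape of W3C Web Annotations; a real IIIF
# manifest nests these more deeply and varies by Presentation API version.
sample = """
{
  "annotations": [
    {"type": "Annotation",
     "body": {"type": "TextualBody", "value": "King Solomon"},
     "target": "https://example.org/canvas/1#xywh=100,50,80,200"},
    {"type": "Annotation",
     "body": {"type": "TextualBody", "value": "Mandorla around Christ"},
     "target": "https://example.org/canvas/1#xywh=300,100,150,300"}
  ]
}
"""

def annotation_texts(manifest_json):
    """Return the body text of every annotation in the fragment."""
    data = json.loads(manifest_json)
    return [a["body"]["value"]
            for a in data.get("annotations", [])
            if a.get("type") == "Annotation"]

print(annotation_texts(sample))  # ['King Solomon', 'Mandorla around Christ']
```

The `xywh` fragment in each target is what anchors an annotation to a rectangle on the image; freeform shapes use an SVG selector instead.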

In this post, I create some data visualizations in the form of maps using Tableau. The data set concerns rural areas and small towns in America and was found on data.gov. The research questions I explored using this data and Tableau’s map features concern whether rural and small-town communities across America largely share certain community and demographic characteristics, follow certain patterns, or are more randomly distributed. The results, of course, show that this depends on the characteristic in question.

Opening the dataset in Tableau, I first chose the tables I wanted to work with. Creating the map only required dragging one of the components on the left that are geographic in nature (denoted with a globe symbol) onto the Detail button of the Marks card. For this project I used the county data. At this point the map doesn’t tell us much useful information, only which counties have small towns, which includes most counties in the U.S.

adding counties to the detail marks card to create a map visualization.
counties with rural and small town communities.

To make the map show meaningful data, I added various measures to compare, such as the number of areas at each education level in these rural/small-town counties. These bubble maps were formed by dragging the relevant data fields from the left-hand menu onto the map.

map showing low education areas, these being most prevalent in southern states.

The resulting maps show a great disparity between northern and southern states when it comes to areas being determined to be low education. 

Other measures show other patterns, such as this one showing which counties had the highest number of rural areas deemed farm dependent – predictably, those in the farm belt. Another shows the more evenly distributed recreation-dependent rural areas.

map showing farming dependent areas, clustered mostly in a vertical strip from the Dakotas down through Texas.
map showing recreationally dependent areas distributed across the U.S.

To play further with these maps and Tableau’s features, I changed the background map style by going to the map menu, and under background maps, picking the outdoors setting. 

changing the background map

Instead of relying on often-deceptive bubble maps, I also explored some demographic features of these rural areas by dragging the relevant data fields straight onto the map. This displays all counties with the same size dot and provides the numbers of foreign-born people living in rural areas and small towns in each county, visible when you mouse over the particular county you are curious about.

numbers of foreign born people in Kalamazoo county, by continent of origin.

This post covers my first attempts at editing audio using Audacity. This is another project that follows a lesson from the Programming Historian and aims to produce a very short podcast.

After installing Audacity, I imported a track of music, which will become the intro music to the mini-podcast. The program displays this music track as waveforms representing amplitude over time. Zoomed in enough, it does take the form of a wave, though it doesn’t look like one otherwise. There are two waveforms because the track was recorded in stereo.

The other track to include was as yet unrecorded, so I then used Audacity to record my own voice, saying the simple line:

“This podcast is the product of doing this project learning to edit and record audio using Audacity.” 

To prevent Audacity from rerecording the music track, I muted that track while doing the recording. I also had to add a new track.

screenshot adding a new track

Once I ended the recording, its waveform was added right at the beginning, starting at the same time as the music. However, there were a couple of things to do to clean up the new track before worrying about its placement. First was to get rid of extraneous silence, simply by clicking and dragging to select the silences (visible as straight lines) and pressing the delete key.

The other item is in the category of transitions, as I added a fade-in effect. Zooming in on the recording to see the effect take place, I highlighted just the beginning of the wave and selected “fade in” from the effect menu.

I then did the same process at the end of the recording, but selecting “fade out.”
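Conceptually, a linear fade just scales the samples by a ramp. A Python sketch of what fade in and fade out do to a mono list of float samples (Audacity’s actual implementation works on its own internal track representation, so this is purely illustrative):

```python
def fade_in(samples, n):
    """Linearly ramp the first n samples from silence up to full amplitude."""
    return [s * min(i / n, 1.0) for i, s in enumerate(samples)]

def fade_out(samples, n):
    """Linearly ramp the last n samples from full amplitude down to silence."""
    total = len(samples)
    return [s * min((total - i) / n, 1.0) for i, s in enumerate(samples)]

# A constant-amplitude "track" of four samples, faded over its full length.
print(fade_in([1.0, 1.0, 1.0, 1.0], 4))   # [0.0, 0.25, 0.5, 0.75]
print(fade_out([1.0, 1.0, 1.0, 1.0], 4))  # [1.0, 0.75, 0.5, 0.25]
```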

To move the recording to a more appropriate timestamp, a few seconds in, I dragged the waveform horizontally over to where I wanted it in relation to the music track. 

Next was to get rid of the rest of the music, which again was a matter of highlighting the unwanted portion and pressing delete.

To produce a crossfade, once the two tracks were aligned suitably, both tracks were highlighted and “crossfade tracks” was selected in the effect menu.

both tracks highlighted and “crossfade tracks” being selected from the effect menu
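A crossfade combines the two ramps: the outgoing track fades out over the same span in which the incoming track fades in. A sketch with linear gains on plain mono sample lists (Audacity offers other gain curves as well, so treat this as the simplest case):

```python
def crossfade(a, b, n):
    """Overlap the last n samples of track a with the first n of track b,
    applying complementary linear gains over the overlap (mono lists)."""
    overlap = [a[len(a) - n + i] * (n - i) / n + b[i] * i / n for i in range(n)]
    return a[:-n] + overlap + b[n:]

# Two constant-amplitude "tracks": the join is seamless because
# at every overlap sample the two gains sum to 1.
print(crossfade([1.0] * 4, [1.0] * 4, 2))  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```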

Finally I exported the audio as an MP3 and inserted it here, at the end of this post. 

Scholar’s Workbench is a useful feature on the METAscripta website that allows annotation of IIIF images. This post documents some of my first experiments using Scholar’s Workbench, using Bodleian Library MS. Laud Misc. 250. The current binding of Laud Miscellaneous 250 combines two manuscripts: the first is a fourteenth-century MS of 40 homilies on the gospels by Gregory the Great, the second a twelfth-century MS of the first 10 of John Cassian’s Conferences. It is the latter of these, and only one folio of it, that needs particular introduction for the purposes of this post.

Upon registering for a METAscripta Scholar’s Workbench account, I looked through the listings of institutions using IIIF, which Scholar’s Workbench works with. Among these was the University of Oxford, which includes the Bodleian Library and all their digitized manuscripts. Finding MS Laud Misc. 250, I followed the directions to copy the IIIF manifest link. This link was easily pasted into my first Workbench collection (which I renamed John Cassian, Conferences), and the MS was there in the collection.

Once the manuscript was in my collection, clicking on the image brought me to the viewer where I toggled annotations on and started to annotate by selecting rectangles and circles, drawing such shapes around aspects I wanted to make note of, and entering text into the box that appears once such a shape is drawn.

I chose to do all of my annotations on a single folio – 112 recto – primarily for reasons of convenience. This particular folio is fascinating, for as noted in my longest annotation, it gives witness to the Conferences being out of order, not because the quires were mis-ordered during binding, but because the manuscript was written this way. Conference 3 ends on this folio, and Conference 6 picks up directly from it, barely skipping a line between the two.

The script is a transitional protogothic one, and several of my annotations reflect aspects of this, e.g. multiple forms of letters like d and s, and the fusion of adjacent letters.

The IIIF manifest link with my annotations is https://annotate.metascripta.org/manifests/905753465/11/5fa5cd96bfc33ee64c3162ae0d68c549.json

Thus far, Scholar’s Workbench seems a good way to annotate manuscripts, and all the annotations can be viewed as a list, which is particularly convenient for displaying differences in letter forms.

Earlier, as covered in this post, I discovered how to work with the QGIS program to build and edit maps. However, knowing how to change fonts and colors isn’t worth much without knowing how to create the files that contain map layers, such as the rivers or roads that had been pre-made for that earlier lesson. This post demonstrates how these shapefiles are created, again using a tutorial from the Programming Historian.

This project starts where the last QGIS one left off, with the map of Prince Edward Island. Because the new map to be created looks at some differences over time, the first step was inserting another historical map as another raster layer, done just as before. The new part is creating three new vector layers, each of a different type.

The basic steps for each are the same: 

1. In the menu bar, go to Layer > Create Layer > New Shapefile Layer

2. In the resulting “New Shapefile Layer” window:

a. instead of simply typing a file name in the box so labelled, click the ellipsis button next to it to not only name the new shapefile but also tell the computer where to save it.

b. set the CRS to match that of the layers you already have.

c. select the “geometry type” (dependent on the type of data that layer is to contain – points for things like towns, lines for roads or rivers, and polygons for things like regions or lakes.)

setting geometry type

d. add attribute fields – the type and label of the information you want linked to each point, line, polygon, etc.

3. toggle editing and add points, lines, etc., finishing each with a right click which opens a box to add information to the attributes mentioned above in 2.d.

The first shapefile we create is the “points” type and will pinpoint cities and settlements, some of which no longer exist. The attributes here were settlement name, year (established), and end year (for those settlements no longer existing, when they ceased to exist). By default, new attributes are “text” unless otherwise specified, so for year and end year, whole number was selected as shown. 

Then, toggling editing on and selecting add point, one merely clicks where a settlement is/was, aided by the historical map rasters already in place, and fills in the name and year(s) attribute information in the box that pops up.

toggling editing
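For a sense of what such a layer stores, here is the same kind of point-plus-attributes record sketched as GeoJSON, a text format QGIS can also read and write; the coordinate and date values are illustrative, not taken from the project:

```python
import json

# One point feature using the attribute schema described above:
# a text name plus whole-number year and end_year (None while extant).
# Coordinates and dates here are made up for illustration.
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-63.13, 46.24]},
    "properties": {"name": "Charlottetown", "year": 1764, "end_year": None},
}

# A vector layer is essentially a collection of such features.
layer = {"type": "FeatureCollection", "features": [feature]}
print(json.dumps(layer["features"][0]["properties"]))
```

Declaring year as a whole number rather than text, as in step 2.d, is what later lets the layer be filtered or styled by date.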

The second shapefile is to show historical roads and thus uses “line” geometry type. Here the attributes include name of road and year, again changing year to whole number input. 

setting type to whole number when adding attribute fields.

After turning on editing and selecting add line, one places dots along a road, tracing roads from the historical map raster, and right clicks at the end, which cues the attribute box to appear, wherein name and year can be entered.

The third shapefile is to show the lots or districts the island is organized into, allowing us to use the polygon geometry type. Attributes are lot name and year. For rectangular lots, adding an item to the shapefile only requires clicking each of the four corners of the lot after toggling editing and selecting add polygon. 

Non rectangular areas are captured using “snapping” which is only a little more involved. First, I had to find the snapping toolbar, click the magnet to enable snapping, and go into the snapping options.

To be as accurate as possible, this snapping was done with reference to the modern coastline layer, changing the settings as described in the lesson. After closing the settings window, the new polygon is added much as the last one, only using more clicks to trace around the coastline.

showing the vertices of lot 38

Again, ending with a right click opens the attribute box and the polygon with its information becomes part of the shapefile. All the shapefiles created are saved wherever specified in step 2.a. for future use.

Accessibility in the online world is about adapting websites and digital resources to remove barriers to using them, paying particular attention to simple details that would hinder people with various disabilities from making full use of such online resources. Universal Design keeps the same things in mind but with a broader focus on usability for all types of users, recognizing that many accessibility features are useful to people regardless of whether they have a disability. In this post, I consider and implement some ways to increase the universal design of this site.

The first item is something best done as one builds a site and writes posts: making sure images have alternative text. I had done this haphazardly up to this point, and am now updating some of the posts in which I neglected to add alt text, for the benefit of those unable to view the images I often include. This is done easily enough by selecting the image and filling in some text describing the image and its purpose in the alternative text box in the right-hand pane.
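Alt text lives in the page markup as the alt attribute of each img tag, so checking a post for missing alt text can be automated. A small sketch using Python’s standard-library HTML parser; the sample markup is invented for illustration:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collects the src of any <img> tag lacking a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            if not a.get("alt"):
                self.missing.append(a.get("src", "?"))

# Two hypothetical images: the first has alt text, the second does not.
checker = AltChecker()
checker.feed('<p><img src="map.png" alt="georeferenced map of PEI">'
             '<img src="chart.png"></p>')
print(checker.missing)  # ['chart.png']
```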

Beyond alternative text, text size, contrast, font, etc. can be issues for those with vision problems or, in the spirit of universal design, anyone who doesn’t like squinting at their screen. To remedy this, I found and installed a plugin to add these features: One Click Accessibility adds a sidebar widget to the site offering these helpful options.

Finally, since universal design is focused not only on accommodating disabilities but also on ways to make online content more accessible in general, I took some steps to ensure that this site displays properly not only on the laptop screen I’m currently using but also on tablets and mobile devices. WordPress lets you toggle which view you are currently working with to help you make sure things appear correctly.

toolbar with viewing toggles circled, phone view selected

This post documents further adventures using Omeka, this time to build a site and set up an exhibit therein.

Making the decisions on what to call the site and what it should be about was more difficult than actually setting up the site, which only required entering these details in the appropriate boxes.

Screenshot showing the Add a Site page on Omeka.net

Adding items to the site and entering metadata into the Dublin Core fields went smoothly, being a known process already discussed in an earlier post.

To install the required plugins, I only had to go to plugins in the navigation bar and then click install next to the ones I needed.

Screenshot showing the installation of plugins.

Next I started to build an exhibit from some of the items I had uploaded, beginning by selecting Exhibits from the left-hand menu. From here I could add an exhibit, give it a name and description, and start adding pages. I also added some navigation instructions here.

Screenshot showing the exhibit editing page

I decided to let each page I added cover some extent of time for which I could include at least a couple of the items I had added to the site. The breakdown I ended with is seen below. The pages had to be reordered to be in chronological sequence, which was merely a matter of dragging them into the right order.

Screenshot showing the ordering and adding of exhibit pages in Omeka.

Editing each page, I had to choose more titles and then select what type of blocks I wanted to use to build the page. For this project I simply chose file. Adding each file block prompted me to choose which item(s) I wanted to appear there, and to add captions explaining what I wanted viewers of the exhibit to know about the items grouped together on each page. Again, these blocks could be (and were) reorganized by dragging them into a sensible order.

To get the exhibit to be accessible from the home page, I added it to the top navigation bar, using the Appearance settings in the admin bar to find my way to where said navigation bar could be edited.

Screenshot showing the editing of the navigation options for the new site being built with Omeka.

As seen above in what I selected as a Homepage, I also added an Introduction page to the site using Simple Pages in the lefthand menu. This was again very straightforward, though I had to remember how to use HTML tags again. Here I explained what the site was about and how to use it to either view the exhibit or browse all items.

Screenshot showing the editing of an Introduction Homepage for the new site built with Omeka.

Tropy is a useful program for organizing images and this post details getting started with this application.

Installation was simple and straightforward, and upon opening Tropy, the first thing was to create a project. I started with a generic “Project 1” so I could see how it would work before I had an idea for a sample project. 

Tropy allows you to change the name of projects as you are working on them, simply by clicking on the title and selecting “rename project.”

As the new title suggests, this first project now centers around the biblical story of the healing of the paralytic at the pool of Bethesda found in John 5:1-15. I have various images (paintings, maps, models, etc) relating to this passage that are in a variety of different file formats. Entering them into Tropy is done via the highly user-friendly method of drag and drop. 

What makes Tropy so useful is that you can enter metadata for each image. Though Tropy has its own suggestions for metadata fields, it also has the option to use standard Dublin Core, which I used for this project. In Preferences, this can be set as the default, so the first fields you see are those corresponding to Dublin Core categories.

Adding tags is done by simply selecting the item or items you want to tag; under the tab in the right-hand panel there is a place to click and add such information. There’s even an option to customize tag colors.



This post looks at two photogrammetry projects of churches: one is the chapel at Duke University, the other the churches documented in the Mapping Gothic France project from Columbia University. While both projects have impressive church buildings as their subject, and both use photogrammetric means to produce models, they differ in many ways.

In general, despite Mapping Gothic France being the larger project, there was more information readily available about the making of the Duke University Chapel project. The Mapping Gothic France About page did not give much information as to how that project was done, other than that it “builds upon a theoretical framework derived from the work of Henri Lefèbvre.” They do, however, provide the source code on GitHub. An article in Duke Today explains that the Duke project was processed on a computer cluster, producing a point cloud from 1430 photos taken in a two-hour time frame.

The stated purposes for the projects also diverge. The Duke University Chapel project’s straightforward goal is “to document the chapel at this unique point in its history” with the projected future utility of providing a model for any needed repairs. There is also the further use of this project as educational for those involved in putting it together, i.e. “to introduce photogrammetry to Duke faculty and graduate students.”

The Mapping Gothic France project, by virtue of encompassing so many churches contemporaneous with one another, has the goal of providing “new ways to understand the relationship of hundreds of buildings conventionally described as ‘Gothic’ — in terms of sameness and difference, found in the forms of multiple buildings within a defined period of time and space that corresponds to the advent of the nation of France.” (from the About page on the Mapping Gothic France site). There is a much greater sense with this project that there is an underlying story, as it “embraces not only the architectonic volume but also time and narrative” and “seeks to establish linkages between the architectural space of individual buildings, geo-political space, and the social space resulting from the interaction (collaboration and conflict) between multiple agents.”

When viewing the projects, the information available is far greater with Mapping Gothic France, where one can find not only the 3D model but also other information: floor plans and photos, both historic and modern. However, clicking on these other resources did not reveal any more metadata about them. With the Duke chapel, the finished 3D model is all one sees.
