The event, which was held in June 2020, was aimed at researchers, students and professionals working with museum and archive collections, digitisation and/or research strategies. Its purpose was to provide examples and advice on using metadata for research and outreach, to inform about standards and practices regarding metadata, and to highlight the benefits of heritage institutions collaborating with academia in enriching collection metadata.
Yannick de Raaff, Groningen Institute of Archaeology, University of Groningen
In this blog post I would like to illustrate how and why we applied digital techniques (photogrammetry and Virtual Reality) to solve a specific archaeological problem. Our study concerns the architecture of a Bronze Age (early Mycenaean; ca. 1700-1420 BCE) tomb from the North Cemetery of Ayios Vasileios, Lakonia, Greece. This particular tomb (called Tomb 21) is roughly rectangular in shape (inner dimensions ca. 2.26 x 1.33 m) and was filled to the brim with a large mass of some 200 stones, evidently the remains of a roof (figure 1). However, even after the complete removal of all the stones, careful recording, and the excavation of the inhumations underneath, we were unsure about the original shape and construction of the roof. How could this hodgepodge of stones once have formed a cover, and how had it collapsed? Rebuilding the roof in real life was not possible, since the tomb had been backfilled after the excavation was completed. Therefore, we decided to address the issue with digital techniques.
With the financial help of several grants, we started working together with the Virtual and Augmented Reality experts of the Reality Center of the University of Groningen. A VR environment was created in Unreal Engine using a surface model of the empty tomb, to which 3D models of the collapsed stones were added. Both the interior of the excavated tomb and the stones were modelled with Structure from Motion (SfM), also called photogrammetry. The VR environment offered us three main advantages. Firstly, we could scrutinize the still-standing walls of the tomb from any angle, even after completion of the excavation. Secondly, by carefully studying the various photographs and videos of the collapsed stones as they were being excavated, we could approximate their original positions and place them accordingly within the VR environment. That allowed us to better understand the relative position of the stones within the tomb and vis-à-vis each other, and thus the collapse (figure 2). Lastly, the stones could be restacked interactively in an attempt to recreate, or at least approximate, the original appearance of the tomb’s roof and to explore which designs were plausible and which were not.
After the modelling, the programming and the gathering of parallels from contemporary tombs, it was finally time to put on the VR goggles, strap on the controllers, and start on our life-size three-dimensional puzzle. Weeks were spent in the virtual tomb, labouring under a virtual Greek sun, grabbing stones with the controllers, moving them around, putting them in place, pressing the ‘save’ button, and trying again. After many attempts, we were able to confidently refute a number of designs and settle on the most likely reconstruction: the tomb was first covered by a series of beams, on top of which the largest slabs were placed (these were found deepest inside the tomb and must therefore have fallen down first), with the remainder of the stones stacked on top to create a cairn (figure 3). Judging by the way the stones had fallen, it seems likely that the beams broke and caused first the slabs and then the rest to tumble inwards. Instead of merely hypothesizing about the tomb’s cover and its collapse, the digital techniques made it possible for us to test hypotheses in a structured and well-argued manner. Virtual Reality was, in a way, used to perform experimental archaeology in a digital environment. (For further information, see the embedded video below.)
The project is included in the DIG IT ALL exhibition of the University Museum (University of Groningen), as is the video added below this blog post. More information on the exhibition can be found on the website: https://www.universiteitsmuseumgroningen.nl. The exhibition is part of the centennial celebrations of the Groningen Institute of Archaeology and includes various other archaeological projects that have employed innovative digital techniques. Our project has so far been presented at various conferences (click the link for the PowerPoint presentation), and a scientific article will appear in the proceedings of the Lakonia conference.
The excavations at the Ayios Vasileios North Cemetery are directed by Sofia Voutsaki as part of the Ayios Vasileios Project, directed by A. Vasilogamvrou, Director Emerita of the Laconia Ephorate, and carried out under the auspices of the Archaeological Society at Athens. Our thanks go out to the Groningen Institute of Archaeology (GIA), the Ammodo Foundation, the Institute of Aegean Prehistory, the Mediterranean Archaeology Trust and the Centre of Digital Humanities. The reconstruction project was a collaboration between archaeologists from the GIA (Yannick de Raaff, Sofia Voutsaki, Theo Verlaan and Gary Nobles) and staff of the Centre for Information Technology, interfaculty V/AR hub (Gert-Jan Verheij, Frans van Hoesel and Pjotr Svetachov).
The conference “Integrating Digital History, 3rd Digital History in Sweden Conference (DHiS2020)”, has published a Call for Papers. The deadline is on the 2nd of October 2020. Read more on the conference site!
I started out as an Assyriologist*, specializing in the world of cuneiform, a script in broad use across much of the Middle East from c. 3,300 BC until c. 100 AD. Not too many people are studying it, and relatively few of that exclusive lot were doing much with a computer. Later, for my doctorate, I ventured into landscape archaeology, a field which had next to nothing to do with cuneiform, something my doctoral supervisor gently pointed out when he said that I had committed career suicide by coming over, at least in the short term. For the first year, I locked myself in a basement trying to learn the basics of GIS and remote sensing, with a whole bunch of nerds who knew nothing whatsoever about cuneiform, but very much about exotic abbreviations such as GPS, GCPs, CRS, LIDAR, UAV, and DEMs, and compound curiosities like georectification, least-cost pathways, viewscapes, panchromatic, and shapefile. I found myself constantly returning to my supervisor’s office half a day after each meeting we had, to say that we had fundamentally misunderstood each other concerning whatever interface between philology and landscape archaeology we had last discussed.
Gradually, however, I also realized that each of those meetings produced something genuinely new because, being so wildly unfamiliar with common knowledge in each other’s fields, we constantly found ourselves asking new questions, addressing new issues, seeing new perspectives in matters otherwise all told and accounted for. A lot of this came straight out of the elementary yet stimulating challenge of how to integrate data and knowledge from very different fields of research within a common frame. And that’s what I found so immensely fascinating about digital fora in an academic setting, then and now: that subtle ability of a shared methodological or technical forum to bring different perspectives, different minds, different disciplines together, oftentimes for purposes that were rarely immediately clear to anyone involved.
Thus, my doctoral dissertation came to deal with a very novel combination of quantitative approaches to data contained in a couple of thousand administrative texts – and settlement patterns as derived from spatial data sets that the nerds in the basement had been playing around with for years. The core argument was that, when properly scaled within a formal and coherent analytical frame, ancient texts could indeed be brought to bear on conclusions derived from satellite imagery and site mapping at a regional level – in my case, casting some decent, and empirically solid, light on the material size of early state economies.
My current work at Uppsala University, and my engagement with Digital Humanities Uppsala, is very much a result of those early exposures to very different ways of studying and approaching the past. Our brand-new research project in Assyriology, funded by Riksbankens Jubileumsfond and called Geomapping Landscapes of Writing, or GLoW for short, aims to trace the distributional and compositional outlines of a discrete historical record, namely the cuneiform corpus, in its entirety. Counting perhaps half a million known texts, spanning a temporal transect of some 3,000 years, and traversing a geographical area reaching from the central Mediterranean to the eastern deserts of Iran, this corpus offers an opportunity to study the role of writing in early human history, its material distribution and composition, from a bird’s-eye perspective. Such an undertaking would have been unthinkable a decade ago, and its methodology owes as much to landscape archaeology as it does to cuneiform studies. Related ideas underpin the research network TextWorlds: Global Mapping of Texts from the Pre-Modern World, another new initiative that I am involved in, funded by CIRCUS, which will bring together philologists, archaeologists, and historians from across the university to discuss and explore shared traits between written corpora from around the world. A crucial basis for such initiatives is the ability to work from a shared, read: digital, methodological platform able to capture vast amounts of data from seemingly disparate disciplines, each with their own way of approaching and studying the past, and to oversee it within a shared frame of scholarly inquiry. So, over the next three years, I am looking forward to spending even more time lingering over applications, programs, and code with researchers from across the humanities and social sciences, debating, experimenting, and discovering new ways of fusing together our various specialties.
From past experience, I would say that fora like Digital Humanities Uppsala are perfect for that sort of activity.
* Don’t worry if you don’t know what that is. Back in the pre-digital age, one of my teachers had been enrolled at Psychology for most of his undergraduate years because the university administration didn’t know either.
Maureen Ikhianosen Onwugbonu and Aikaterini Charalampopoulou, students at the DH Master’s Programme, recently published reports on their internships at Museum Gustavianium.
By Dr. Kerstin B Andersson, Dept. of Linguistics and Philology, Uppsala University / Swedish Council of Higher Education
I’m a media anthropologist and Indologist, currently working on the Indian diaspora and communication. During the last couple of years, I have focused my research on migration and the use of new media and social media, a growing academic field.
The impact and importance of the new technologies for migrants is well established. The appropriation of ICTs and new media environments has become a ubiquitous feature of everyday life in migrant groups. The development of the research field is closely related to the expansion of ICTs and new media. The first studies dealing explicitly with migration and new media appeared at the end of the 1990s; by now, it has become an established academic research field.
Academic research in the field of migration and the use of new media is interdisciplinary, drawing on approaches from a number of subject areas, such as anthropology, migration studies, media and communication studies, studies in new science, Internet studies, sociology, and cultural studies. The research area is understudied, characterized by rapid changes and shifts, and is shaped by the changing structural conditions of migrants and the proliferation of forms of media. For example, the 2015 European refugee crisis led to a number of studies on the impact of new media on forced migration.
In a recent article, I provide a comprehensive overview of the rapidly expanding academic field of migration and the use of new media. So far, the research field has been characterized by an increasing number of empirical case studies on the use of new media in migrant groups. Through a review of the existing literature in the field, I provide an inclusive narrative synthesis of the academic field. The result is presented in the form of a narrative literature review, where I elaborate on the status of the research field, the primary themes and topics of research interest, the theoretical and conceptual issues under investigation, and the methodological approaches to research in this field.
By Amalia Juneström, Department of ALM, Uppsala University
My name is Amalia, and I’m a PhD student from the Department of ALM at Uppsala University. Thanks to a grant from Riksbankens Jubileumsfond – The Swedish Foundation for Humanities and Social Sciences – which allows researchers to attend the annual summer school for the digital humanities at Oxford University, I had the privilege of participating in the week-long summer school in Oxford earlier this summer.
When I left for Oxford, I had already been involved in the planning of our own new international and interdisciplinary digital humanities master’s programme, which will start in our department this autumn. The planning has been an enjoyable experience, and I was looking forward to taking things further. However, although I was well aware of the increasing role played by tools and techniques from the digital humanities within my own discipline, my relationship to them had so far been tangential. To tell the truth, my experience of many of the new computer-based techniques used within both my own field and the digital humanities had been one of fascination mixed with trepidation. In short, I felt an urgent need to broaden my understanding of this knowledge domain; the opportunity to participate in a one-week introductory course was therefore much appreciated.
The summer school offered a variety of strands providing insight into different domains of knowledge within the digital landscape. In order to improve my general understanding of the digital humanities, I chose the strand ‘An Introduction to Digital Humanities’. In terms of participant numbers, this strand turned out to be by far the largest within the summer school, and it was well suited for those who, like me, wanted to better acquaint themselves with the tools and methods found within the interdisciplinary field of the digital humanities. Unlike the other workshop-based strands, which offered hands-on practical training in the techniques and tools of each course, the strand that I chose was mainly lecture based, making it well suited to beginners. By drawing on expertise from many different fields, the lectures offered insight into a range of research approaches embraced by the digital humanities.
During the five days of the summer school, we explored a range of research areas, such as text mining, digital archiving and musicology. I think that everyone who participated found it useful to go through such a wide variety of topics, digital tools and methodological spheres of application. All in all, I found the selection of themes and topics at the summer school very well organised and rewarding. I am also truly convinced that location and setting can have a great impact on the outcomes of learning, and what location could be better for acquiring new knowledge than Oxford, one of the world’s most famous centres of learning? But even if you don’t believe there is a connection between location and successful learning, the historical setting made the experience highly memorable, and I really appreciated our accommodation in the romantically Victorian red-brick Keble College, whose historical atmosphere was reminiscent of Brideshead Revisited.
By attending the summer school, I definitely acquired a better understanding of some of the research methods and techniques which are important within my own research field and which are of interest to my own academic journey. I would like to thank Riksbankens Jubileumsfond for the opportunity to take part in the summer school, and I encourage everyone who has an interest in the digital humanities to check out the programme for next year’s summer school and apply!
By Ylva Söderfeldt, Department of History of Ideas, Uppsala University
Today, patient organizations exist for virtually every illness. They range from small, informal self-help networks to large, well-funded associations. Although they possess quite significant power – as intermediaries between patients and healthcare providers, for instance, or as lobbyists in the political sphere – they have been mostly ignored in historical research. We know very little about how these important stakeholders in healthcare emerged and evolved.
An ongoing pilot study at Uppsala University is now trying to develop methods that could make a comprehensive history of the patient movement possible. With the generous support of the Kerstin Hejdenberg scholarship from the Swedish Asthma and Allergy Association, the association’s member journal Allergia is currently being digitised. Together with Karl Berglund, University Library, and Matts Lindström, Digital Humanities Uppsala, we are analyzing the material with text mining tools.
The purpose is to see how we can measure changes in vocabulary that signal discursive transformations regarding allergy. From previous research, we know that the second half of the twentieth century saw substantial changes to the illness concept on both a medical and a cultural level, but the question is whether these changes are reflected in the publications of the patient organization – and if so, how they can best be defined and measured quantitatively.
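One simple way to operationalise such a measurement is to track the relative frequency of candidate terms across publication years. The sketch below illustrates only the general idea, not the project’s actual pipeline: the toy corpus, the chosen terms, and the per-year grouping are invented for the example, whereas a real run would use the OCR’d journal text.

```python
from collections import Counter

def relative_frequencies(docs_by_year, terms):
    """For each year, compute each term's share of all tokens that year.

    docs_by_year: dict mapping year -> list of document strings
    terms: the vocabulary items whose change we want to trace
    """
    freqs_per_year = {}
    for year, docs in docs_by_year.items():
        tokens = [tok for doc in docs for tok in doc.lower().split()]
        counts = Counter(tokens)
        total = len(tokens)
        freqs_per_year[year] = {t: counts[t] / total for t in terms}
    return freqs_per_year

# Invented toy "issues" of a member journal:
corpus = {
    1970: ["asthma attack treatment asthma nurse"],
    2000: ["allergy awareness lifestyle allergy allergy prevention"],
}
freqs = relative_frequencies(corpus, ["asthma", "allergy"])
# freqs[1970]["asthma"] -> 0.4, freqs[2000]["allergy"] -> 0.5
```

Normalising by the yearly token count, rather than comparing raw counts, matters for a corpus like this one, where the number and length of issues vary over the decades.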
In the short term, results from the project are expected to offer important insights into the history of asthma and allergies – some of the most prevalent diseases in present-day society – and the role that patients themselves played in defining their illness. But the project also has a longer-term goal: by going through all stages from digitisation, via pre-processing, to analysis, we gather crucial experience in how to make the move from analogue to digital history. We test and evaluate methods and workflows for text mining of a relatively small, inconsistently structured corpus, with research questions grounded in the history of knowledge. Without doubt, the experience we gather will be helpful for other researchers who face similar challenges.
By Karolina Andersdotter, Digital Methods Librarian, Uppsala University Library.
At the end of May 2019, Uppsala University was appointed a Cooperating Partner of DARIAH-EU. DARIAH stands for Digital Research Infrastructure for the Arts and Humanities; it is a pan-European infrastructure for arts and humanities scholars working with computational methods.
DARIAH-EU consists of 17 member countries as well as several cooperating partners in eleven non-member countries – including Sweden. Uppsala University is the second Swedish institution to join, following Linnaeus University. Together with Linnaeus, we now aim to form a national infrastructure consortium in order to apply for full membership in DARIAH. We are also in conversation with the centres for digital humanities at Umeå, Lund, and Gothenburg Universities.
The initial commitment as a cooperating partner runs for two years and is administered through DH Uppsala. Through exchange between UU researchers and the DARIAH ERIC Virtual Competence Centres, UU aims at sharing knowledge on linked and open collections and data, content management and storage of research data, enhancement of digital scholarly tools, and digital research infrastructures, environments and standards.
The points above are key issues identified in the current draft of goals and strategies for Uppsala University; this cooperation can help us towards the goals of first-class digital research and education infrastructures, open science, and safe and open storage of and access to data.
Knowledge exchange through the VCCs is expected to develop and improve the research support services provided by the university library, thus making an impact for all researchers in need of digital support through the scholarly life cycle.
Bill Kretzschmar is Harry and Jane Willson Professor in Humanities at the University of Georgia and a visiting professor at the University of Oulu in Finland. For 34 years he edited the American Linguistic Atlas Project, the oldest national research project to survey how people speak differently across the country, work which led to his preparation of American pronunciations for the online Oxford English Dictionary. He has been active in corpus linguistics, including work on tobacco industry documents, and has been influential in the development of digital methods for the analysis and presentation of language variation, including applications of complexity science.
Introduction
In May, Bill Kretzschmar visited Digital Humanities Uppsala, in collaboration with the language GIS research network, to deliver a talk in the DH Uppsala Seminar Series. Kretzschmar addressed the theme of sustainability in an institutional setting and, drawing upon his long experience at the University of Georgia (and its Digital Humanities Laboratory, or DigiLab, more specifically), proposed collaboration with the university library as the only realistic option for long-term sustainability. A video recording of the seminar is available here (in progress).
We also used the occasion to conduct a short interview with Bill, in which he touches upon, among other things, the technological history of early DH (humanities computing) as he experienced it, as well as the matter of sustainability. It is published below.
Interview
Could you talk to us about the transformation of The Linguistic Atlas Project from a printed publication to its early digital versions – especially in relation to the material and technological conditions that surrounded this process?
When I first started work on the Linguistic Atlas Project in 1977 (!), as a graduate assistant, the whole project that I could see was on paper. The team of graduate assistants was working on recopying some of the field records, and I was using a typewriter to prepare camera-ready copy of some of the field records for publication in the University of Chicago Press series of fascicles of the Atlas of the Middle and South Atlantic States. There were a few audio recordings on reel-to-reel tape, but we weren’t working with those at the time. Our very first step in making the paper records digital was a grant from the National Endowment for the Humanities to start a database of responses; at that point, in 1983, I made the decision to use PCs instead of the university mainframe, as something we could control ourselves, and because of the new availability of a Winchester hard drive (all 10 MB of its storage). After I took over the Atlas in 1984, we were still working with paper records, and my first task was to invent the digital technology necessary to prepare camera-ready copy for the fascicle series. I therefore learned about type founding and created phonetic fonts that we could see on the computer screen, using a special graphics board that let me design phonetic characters in “high ASCII” (codes 128-255) and print them out as dot designs on the newly available Hewlett Packard laser printer. About that time the University of Chicago Press cancelled our print publication contract, so these methods were just used to produce the camera-ready copy for our Middle and South Atlantic Handbook. I could also use the new publication system to make camera-ready copy for the Journal of English Linguistics, which I edited at the time and had printed privately until the mid-1990s.
Also in the late 1980s, I taught myself how to use the RBase database system, because the programmer for our earlier NEH grant had failed to get it to work, and designed the database structure for the Atlas that we still use today. We got another NEH grant in the early 1990s to keyboard Atlas data – and found out that it was just too time-consuming and expensive to enter massive amounts of phonetic data – but we completed entry of about 15% of the data, and that was enough to launch the whole digital process. That digital data allowed all of our new developments with GIS: interactive GIS for that data, first on Macs and later on the Web, and then applications of technical geography like spatial autocorrelation, density estimation, and Kohonen self-organizing maps.
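Of those techniques, spatial autocorrelation is the easiest to illustrate briefly. A common measure is Moran’s I, which compares each site’s value with those of its neighbours; values near +1 mean that similar values cluster in space, values near 0 suggest a random pattern, and negative values indicate alternation. The sketch below is a minimal, self-contained illustration with invented data, not code from the Atlas itself.

```python
def morans_i(values, weights):
    """Moran's I for a list of site values and a neighbour-weight matrix.

    values: the observed value at each site
    weights: weights[i][j] > 0 when sites i and j are neighbours
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]                      # deviations from the mean
    w_total = sum(weights[i][j] for i in range(n) for j in range(n))
    cross = sum(weights[i][j] * dev[i] * dev[j]           # neighbour cross-products
                for i in range(n) for j in range(n))
    variance = sum(d * d for d in dev)
    return (n / w_total) * (cross / variance)

# Four survey sites in a row; each site neighbours the next one.
values = [1, 1, 5, 5]              # e.g. counts of a dialect feature per site
weights = [[0, 1, 0, 0],
           [1, 0, 1, 0],
           [0, 1, 0, 1],
           [0, 0, 1, 0]]
# Like values sit next to each other, so I comes out positive (1/3 here).
```

Reversing the pattern to alternating values such as [1, 5, 1, 5] drives I negative, which is the kind of contrast that makes the statistic useful for mapped survey data.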
As is exemplified by your previous answer, the particular needs and conditions of humanities infrastructures will always be in flux – and of course factors other than technology (institutional, political, financial) play an equally important role. What is your impression of current conditions within American and European academia?
When I started working with computers on humanities tasks in the 1980s, some people were using mainframes, but we decided to use separate PCs because we could control them ourselves and not have to wait for our low-priority programs to run in the middle of the night, and because we could get at least some mass storage. But my work with Linguistic Atlas data always pushed the limits of storage, and of processing once we started using statistics. We were noticed by the U of Georgia Computing and Networking Service when we tried to run statistics on their mainframe and used more computing time than they realized we needed. While personal computers will always be our choice for data entry and for writing and running small programs, we now have to have larger infrastructure for our large data sets and for Web distribution of our materials. This means cooperation with units in the university that manage bigger infrastructure than we can run in the office. It took a long time for us to realize that our natural partner was not the computing and networking unit at the university, but the university library. Many of our colleagues in the sciences need ultra-fast processing, and that is what our Georgia computing administration has provided. But what we need in the humanities is the ability to create interactive programs that store and present great masses of information, and the library is the unit of the university whose mission it is to do that. My grant resources have helped the library to create the infrastructure that we need for the Atlas, and the library has gone even further to make such infrastructure available to others in the humanities. In Europe the situation is very uneven. I have heard of impressive humanities computing networks in Germany and Norway. Not so much, yet, in the Nordic countries, even though there are lots of great digital humanities projects in Finland, Norway, and Sweden.
Support for the digital humanities in England seems to have declined, for example with the demise of its institutional digital humanities organization and the move of digital humanities from the Oxford Computing and Networking Service to the Bodleian Library. In Eastern and Southern Europe (perhaps with Italy as an exception) the situation is much worse. The bottom line is that those of us in the humanities really have to have institutional partners for infrastructure. We cannot sit by ourselves with laptops in our ivory towers.
While cooperation is essential for building digital research infrastructures for the humanities, there is also always the difficult question of sustainability and maintenance. What happens after a project is finished – and what can we do to keep the digital resources sustained?
The end of projects is inevitable. All of our digital humanities developments begin with smart people who dream them up and find a way to implement them. But those smart people are usually not followed by people quite as smart, or at least not as interested in the aging projects as the original inventors were. When projects lose their momentum, there is simply no current way to keep them alive, even as static, no-longer-working snapshots of the interactive programs. I have tried to plan for this on the Linguistic Atlas Project by creating a maintenance-free part of our site, the Data Download Center, a file structure from which users can download all of our data. The interactive elements of our Web site will eventually go dark when there is nobody to maintain them. Our partner the library cannot afford to pay people to maintain our site. That has been my biggest job over the decades: not inventing the tools and sites, but consistently finding the money to pay for their maintenance and development. While we do not have access to as much grant money as natural and physical scientists do, not to mention medical professionals, there has been enough money for me to keep the office open for decades. But when I retire, none of that money will be coming in. The best we can hope for is that the maintenance-free portions of sites will still be available even after the fancier parts that need maintenance have failed. Maybe this is a gloomy prediction, but at the moment I do not have a better plan.