Nikolaos Gkizis Chatziantoniou describes his internship with the CDHU working on the projects SweMPer and Quantifying Culture
During my internship at the Centre for Digital Humanities at Uppsala University, I worked both on digitizing physical books for the project “Swedish Medical Periodicals (SweMPer)” and on pre-processing, cleaning, and labeling images for the project “Quantifying Culture”.
For the digitization project SweMPer, we had to create digital copies of medical periodicals as part of enriching medical history archives. The method used was destructive scanning, which is much faster and more cost-efficient than other methods, sacrificing the book’s physical integrity but not its contents. The spine of the book is removed with a guillotine and the pages are scanned, digitized, and transcribed using Optical Character Recognition (OCR). The remaining physical pages are archived in case we need to rescan them.
The “Quantifying Culture” project is an effort aimed at using Artificial Intelligence (AI) within the cultural context, as well as reviewing existing methods. The task here was to train an algorithm that automatically classifies gender in a collection of digital images. During the data collection stage, we used a web scraper to download images from the World Culture Museum to create the datasets. Afterward, I manually cleaned and grouped the images in order to create the training data for the algorithm. Additionally, I had to experiment with ideas on how to automate parts of the process and think about solutions to the problem of overlapping datasets.
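The overlapping-datasets problem mentioned above can, in simple cases, be attacked with content hashing. The sketch below is purely illustrative (it is not the project's actual tooling, and the folder names are hypothetical): it flags byte-identical files that occur in more than one dataset folder, so that an image scraped twice does not end up in both training and test data.

```python
import hashlib
from pathlib import Path

def find_duplicates(folders):
    """Flag byte-identical files that appear more than once across the
    given dataset folders. Returns (duplicate_path, first_seen_path) pairs."""
    seen = {}          # content hash -> first path seen with that content
    duplicates = []
    for folder in folders:
        for path in sorted(Path(folder).rglob("*")):
            if not path.is_file():
                continue
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in seen:
                duplicates.append((path, seen[digest]))
            else:
                seen[digest] = path
    return duplicates
```

Note that this only catches exact copies; near-duplicates (resized or re-compressed images) would need perceptual hashing instead.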
My involvement in these two projects gave me a clearer view of what a digital humanist is and does within the cultural sector. Solving practical problems, automating processes with code, and ultimately thinking about the implications of the methods used, in terms of costs and the social or ethical issues that may arise, are all parts of the digital humanist’s identity.
In more detail, regarding the digitization project SweMPer, I had to think about the implications of fast digitization versus a slower, more detailed one, in relation to funding, available resources, and the project’s timespan. As for the Quantifying Culture project, I got a glimpse of the inner workings of creating an algorithm that classifies gender. I realized that the most important part of the process is actually not the algorithm itself but rather finding the right training data, removing biases, consulting experts on the classification of certain ethnic groups, and thinking through the implications of choosing a binary model to label the images.
The role of the digital humanist is not limited to working on the digitization of culture as a coding humanist. The role becomes clearest when a project lacks social sensitivity, focuses on the data instead of the human factor, or removes diversity in favor of clean and uniform results. The digital humanist involved in such projects reintroduces those elements into the research and shifts the focus towards a more human and socially aware AI.
In the following post, Dr. Michaela Vance writes about her pilot project undertaken with the CDHU: an author-attribution study of Frances Brooke’s libretto, Marian.
About six months ago, in a moment of unbridled optimism, I set out to investigate whether the second, performed version of Frances Brooke’s libretto Marian was actually and truly written by her, or if the changes to the same could be reasonably assumed to be the work of a second (unknown) writer. Since then, I have come to have a greater appreciation of the difficulties involved in attributing authorship to short texts. In the following, I will give a short account of a workshop on the subject, hosted and organized by the Centre for Digital Humanities at Uppsala University, following my successful pilot project application to the same.
In preparation for the workshop I made a corpus of texts of a similar era and genre, against which we could run the two librettos in question. These included some of Brooke’s earlier texts, such as the libretto to Rosina, the tragedies Virginia and Siege of Sinope, and sections of her novels. In addition, I included a selection of texts by other writers, such as Dibdin’s Shepherdess of the Alps, O’Keeffe’s The Wicklow Mountains, Bickerstaff’s Love in a Village, and Hannah More’s tragedies Percy and Fatal Falsehood. An initial problem was the OCR quality of these texts – firstly, because there exists no OCR .txt copy of the manuscript of Marian, and secondly, because automatic OCR renderings of old printed texts are often far from ideal. However, I managed to make a decent OCR copy of the manuscript version of Marian with the help of Transkribus, and Ekta Vats made a brilliant OCR copy of the printed version of Marian. The rest of the texts were found at the excellent open-source project ECCO-TCP, which hosts a large collection of SGML/XML-encoded texts.
Having addressed the OCR issues in the week before the workshop, project coordinator Karl Berglund, research engineer Marie Dubremetz and I set out to get some initial results in Stylo (run in R) when we met at Uppsala. We first tried Cosine Delta, which clustered the texts by stylometric similarity. As can be seen in Figure 1, this worked rather well in many ways, but it did not give enough of an indication that Marian differed to any substantial degree from Brooke’s other texts in terms of authenticity. We moved on to the second method, “rolling classify”, but here we repeatedly ran into issues due to the text in question being too short. Intriguingly, both methods cluster the texts by women (Brooke and More) together, while the texts written by men are quite clearly clustered on a separate branch.
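For readers curious what Cosine Delta does under the hood: the analysis itself used the Stylo package in R, but the core idea can be sketched in a few lines of Python (an illustrative re-implementation, not the project's code). Each text is reduced to the relative frequencies of the corpus's most frequent words; those frequencies are z-scored across the documents; and the distance between two texts is the cosine distance between their z-score profiles.

```python
from collections import Counter
import math

def mfw_profile(tokens, vocab):
    """Relative frequency of each vocabulary word in one text."""
    n = len(tokens)
    counts = Counter(tokens)
    return [counts[w] / n for w in vocab]

def cosine_delta(texts, n_mfw=100):
    """Pairwise Cosine Delta distances between texts:
    z-scored most-frequent-word frequencies, compared by cosine distance."""
    tokenised = [t.lower().split() for t in texts]
    total = Counter(w for toks in tokenised for w in toks)
    vocab = [w for w, _ in total.most_common(n_mfw)]
    profiles = [mfw_profile(toks, vocab) for toks in tokenised]
    k = len(texts)
    # z-score each word's frequency across the documents
    means = [sum(p[j] for p in profiles) / k for j in range(len(vocab))]
    sds = [math.sqrt(sum((p[j] - means[j]) ** 2 for p in profiles) / k) or 1.0
           for j in range(len(vocab))]
    z = [[(p[j] - means[j]) / sds[j] for j in range(len(vocab))]
         for p in profiles]

    def cos_dist(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a)) or 1.0
        nb = math.sqrt(sum(x * x for x in b)) or 1.0
        return 1 - dot / (na * nb)

    return [[cos_dist(z[i], z[j]) for j in range(k)] for i in range(k)]
```

The resulting distance matrix is what a clustering dendrogram like the one in Figure 1 is built from; texts with similar function-word habits end up on the same branch.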
In the next step, we tried to tackle the issue of the similarity of the two librettos by distilling the later version of Marian into a document that only contains the changes – a kind of super edition here called “disputed Marian new additions cleaned”. Having tried a variety of programs to help me spot and separate changes to the newer version of Marian, I finally gave up and did it manually, placing the versions next to each other and marking each change by hand. Given the very limited length of the texts and the fact that the librettos were interspersed with songs, we had low hopes of getting clear results. Nevertheless, as can be seen in Figures 3–5, some interesting results came out of the new Stylo effort. First, we can see that the variation in how the texts are grouped depending on method is quite notable, but also that the women writers continue to consistently group together. Second, in Figure 4 the distilled document with all of the isolated changes to Marian is further away from the manuscript version of Marian than Rosina is. This is intriguing, as it indicates greater variety of style between the two versions of Marian than between two entirely different librettos by Brooke. Caution needs to be exercised, however: the texts cluster on the level of author probability rather than text-specific features as such, and both versions of Marian and Rosina are situated on the same branch. Getting less ambiguous results would require further testing, with even greater attention to stylistic, syntactic, and morphological data, especially as we cannot speculate as to who might have made the changes to the original manuscript, and therefore do not have access to a reliable selection of comparison texts.
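The manual collation step, placing the two versions side by side and marking changes, can also be approximated in code. The hedged sketch below uses Python's standard difflib to pull out lines that were added or rewritten in a later version; the sample lines are invented, and a real libretto would first need careful line-level alignment and spelling normalization, which is precisely why hand collation remained the more reliable option here.

```python
import difflib

def isolate_additions(old_lines, new_lines):
    """Return only the lines that were inserted or rewritten in the
    newer version, i.e. a distilled 'new additions' document."""
    sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    added = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag in ("replace", "insert"):
            added.extend(new_lines[j1:j2])
    return added
```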
I went into the project feeling fairly certain that Brooke had not made the changes to Marian herself, but now, even with such uncertain results, I find myself more open to the idea that she did indeed make those changes. Part of this has to do with the level of attention I paid to the smallest details of the two scripts as I prepared them for the Stylo experiments. The method encouraged me to consider aspects of the changes that I had not really registered before – an incidental but positive aspect of stylometry that, somewhat ironically, brings it back to the methods that predate the computer. I hope to discuss the changes, and what they might mean for how we understand Brooke’s ideas about class and genre, in a different outlet in the not-too-distant future.
If you would like to read more about stylometric methods and short texts, I recommend checking out “The Dynamiter” project at http://thedynamiter.llc.ed.ac.uk/ and three articles: Hirst and Feiguina’s “Bigrams of Syntactic Labels for Authorship Discrimination of Short Texts” in Literary and Linguistic Computing (2007), Gorman’s “Author identification of short texts using dependency treebanks without vocabulary” in Digital Scholarship in the Humanities, and Corrinne Harol, Brynn Lewis and Subhash Lele’s “Who Wrote It? The Woman of Colour and Adventures in Stylometry” in Eighteenth-Century Fiction.
The Neoliberalism in the Nordics programme, funded by Riksbankens Jubileumsfond, met at the Sigtuna Foundation in February 2022 with 20 participants from the Nordic countries and from similar research programs in the Netherlands and Germany. For three days, research presentations were interspersed with methodological discussion and joint work on research timelines, source texts, and stakeholder-networks.
Karl Berglund, Research Coordinator for the Centre for Digital Humanities Uppsala, was invited to the meeting. Karl gave an introductory lecture on digital conceptual history and text analysis, which was an excellent basis for our further discussions. Neoliberalism in the Nordics is not based on digital text analysis, unlike, for example, the exciting Market Language project run by Leif Runefelt and Henrik Björck. Instead, a key point in our work is that the history of neoliberalism is multifaceted, spanning several historical stages and a large number of mutually different materials, actor constellations, and genres.
The program is also based on extensive archival work, for example in the business archives that exist in Sweden and Denmark. Analyzing this complex and heterogeneous repertoire digitally entails great challenges, as neither the materials nor the languages are uniform and comparable. In the initial phase of the program, we therefore opted not to use digital methods; the main methods in the program are archival research and conceptual history. At the same time, there are good opportunities to build a smaller corpus to help analyze the larger issue, and it might also be possible to use, for example, the digital parliamentary or daily press material that is available for all the Nordic countries to make trans-Nordic comparisons. Challenges, however, include comparability over time and across language boundaries.
Another difficulty is that the word ‘neoliberalism’ as such is not very useful as a keyword; digital analysis needs to focus on understanding, for example, conceptual pairings such as market and freedom, or natural law and growth. Since none of the participants in the program are experts in the field of digital methods, it was an invaluable help to talk to Karl, and during the spring we are considering the possibility of initiating a pilot project with the Centre for Digital Humanities.
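As a very rough illustration of why pairings beat single keywords: instead of searching for ‘neoliberalism’ itself, one can count which words occur near an anchor term such as ‘market’ and look for systematic companions like ‘freedom’. The sketch below is illustrative only, not part of the program's methodology, and a serious version would add lemmatization and per-language stopword handling.

```python
from collections import Counter

def cooccurrences(tokens, anchor, window=5):
    """Count words appearing within `window` tokens of each occurrence
    of `anchor`, to surface conceptual pairings rather than keywords."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == anchor:
            lo = max(0, i - window)
            hi = min(len(tokens), i + window + 1)
            counts.update(t for t in tokens[lo:hi] if t != anchor)
    return counts
```

Run over a parliamentary corpus per decade and per country, a table like this would let one track when, say, ‘freedom’ starts clustering around ‘market’ in each Nordic language.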
CDHU is a member of three research infrastructure consortia awarded funding by the Swedish Research Council (Vetenskapsrådet): SveDigArk, led by Archaeology at UU; HumInfra, led by HumLab at Lund University; and InfraVis (via the Centre for Image Analysis at Uppsala University and led by Chalmers University of Technology). In the following post, the Centre’s Director, Associate Professor Anna Foka, describes the particulars of CDHU’s involvement in large national infrastructure consortia and their importance for developing humanities and social sciences infrastructure at a national level.
Uppsala has a brilliant tradition of historical-philosophical studies dating back to at least the 17th century, and the Heritage Law saw the formation of the first antiquities department here, the predecessor to the National Heritage Board of Sweden (Riksantikvarieämbetet). Sweden’s first professor of Archaeology, Oscar Almgren, was also appointed at Uppsala, in 1914. So it should not come as a surprise that Uppsala is becoming involved in such national infrastructure projects. But why is this happening now?
One factor is the maturity of resources: there is now a Centre for Digital Humanities at Uppsala University, awarded 30 million SEK for the next five years and supported across disciplinary domains (HumSam, TekNat) and faculties (HistFil). The Centre additionally supports these infrastructure initiatives from within, by providing socio-technical resources as well as support for, and access to, a supercomputer (UPPMAX) and a stable open cloud infrastructure (Central IT).
This means that we can now financially support these nationally awarded infrastructures. The SveDigArk application, led by our extraordinary colleagues at Uppsala Archaeology, is supported by us via a Geographic Information Systems (GIS) expert. We are further a part of HumInfra, the national digital humanities infrastructure led by Humlab at Lund University; there, our role as CDHU and within our module is to support artificial intelligence (AI) training, methods, and tools for the humanities and social sciences, as well as to connect to corresponding European infrastructures from the perspective of information science and information organisation. Our mission is to develop our AI Laboratorium within CDHU. For InfraVis, the national scientific visualisation infrastructure led by Chalmers in Gothenburg, we are tapping into another excellent resource, the Centre for Image Analysis at Uppsala University, to experiment with the latest trends in scientific visualisation methods and tools.
A second point is that we academics, researchers, and scholars are currently more aware of how fast-paced technology leads to organisational change, which in turn leads to new scientific discoveries, and vice versa. Having the opportunity to study those phenomena at a national level is also a consequence of technological development.
A final factor is how we, global academics of the socially and environmentally challenging 2020s, seem to comprehend the importance of collegiality and collaboration for global impact. Researchers in any discipline are now commonly called upon to respond to research questions that are not necessarily compartmentalised into disciplines or strictly bound to one geographical region, but are grounded in the complexities and relations of the real world and are meant to have global impact—in other words, to be both transformative and generative for humanity and society. This is what CDHU hopes to inspire, beyond digital methods and tools that we can all parse and process.
For research in the long run, these three projects mean that, across humanities and social sciences within UU and in Sweden more generally, we will together pioneer a number of trending digital methods and tools for research: namely Artificial Intelligence (such as image processing, natural language processing, machine learning) as well as Data Science, GIS, and Scientific visualisation more generally. It also means that we as researchers will be given the possibility to discuss these incredible new technologies within national clusters and complement each other’s findings, thus bringing Sweden to the forefront of international research.
We are happy to begin Autumn 2021 by welcoming several staff members to the CDHU! The research engineers, coordinators, and staff at the Centre for Digital Humanities work together offering consultation, practical, and technical support for world-class research that integrates digital methods and tools.
Our staff can support different stages of the research process—from research design to obtaining data, statistical analysis, data visualization, and more.
Research Engineers at CDHU have expertise in a wide range of technical skills and knowledge sets, from coding and data science to statistical analysis, machine learning, and natural language processing.
Read below to learn more about our research engineers and coordinators:
Research Engineer – AI and Image Processing
I am a Research Engineer at the Centre for Digital Humanities where I work as an image analysis and data science expert. I am interested in investigating how research-oriented AI solutions can be adopted in real-world applications in the field of Humanities and Social Science. At CDHU, I would like to help scholars and researchers gain an understanding of AI and data-driven methods, teach relevant courses, share knowledge through creative workshops, and also support their infrastructure needs.
By training I hold a PhD in Computer Vision, and I have worked as an AI Scientist at Silo AI Stockholm and, prior to that, as a researcher at the Centre for Image Analysis at Uppsala University. I am also working as a Computer Scientist (AI/HTR) at Folkrörelsearkivet for Uppsala Län on the Labour’s Memory project, which aims at making large-scale material from the Swedish Trade Union Confederation sphere (from the 1880s until today) available and accessible.
My research interests broadly span computer vision, image processing, machine learning and handwritten text recognition, with applications in Digital Humanities and Social Sciences. I like to challenge myself with different types of problems, and also from different domains, and have expertise in deep learning and data science. My most recent work includes large-scale image analysis and machine learning for digital palaeography, using computerised methods to automatically analyse Swedish medieval charters.
Mudassir Imran Mustafa
I am a design researcher with a multidisciplinary background (i.e., Information Systems, Computer Science and Software Engineering). As a design researcher, I have the curiosity and desire to learn new things, see new perspectives and be involved in creating a better world. My research interests include the identification of quality characteristics and formulation of design principles to continuously preserve, improve, and adapt the research infrastructure in an academic research context to allow such infrastructure to be maintained and to evolve more efficiently.
At the Centre for Digital Humanities (CDHU), I would like to help scholars and researchers gain an understanding of how to use the research infrastructure available (e.g., UPPMAX/SNIC) and manage research data. I hope to use my interdisciplinary research expertise, particularly about understanding and designing digital research infrastructures and digital practices, to offer guidance and support for the various interdisciplinary research projects in the Humanities and Social Sciences.
More generally, I am very excited to build a sustainable research infrastructure at the CDHU; I believe this is especially important for the Humanities and Social Sciences.
Research Engineer – AI and Natural Language Processing
I began my studies with a Bachelor’s degree in Classics and Humanities, but following my Master’s and PhD in Computational Linguistics, I transitioned from a literary background to a more technical profile.
As a natural language processing specialist, I hope to help researchers in their use of tools to process textual data: novels, news articles, historical texts, and more. Through Python, bash scripts, and many other tools, corpora will yield their best. For ideas of what I can do and how I can help, you can have a look at the list of projects on my website: www.mariedubremetz.com.
Throughout my career I have witnessed how bringing technology and programming to humanist profiles can foster extraordinary research. Whether one is an anthropologist or philosopher, lawyer or historian, humanists cultivate a meaningful approach that the innovation field needs to benefit from. I envision the centre as a competitive and inspiring, but also open and welcoming, place. Transitioning from literary studies to programming is exciting but not easy, especially if you are alone in the process. I will be there to help.
My background and PhD are in literature, more specifically the sociology of literature, a sub-field that has long applied quantitative methods to literary materials. When digitisation started to take off in the 2000s, it opened up completely new possibilities for systematic literary analysis on a larger scale. I understood that I needed to learn some basic programming and statistics to master these digital methods myself, and this road led me to the digital humanities.
As a scholar, I am currently PI of a research project on contemporary bestselling fiction called “Patterns of Popularity”. Among other things, I track audiobook consumption patterns in datasets derived from Storytel, together with a colleague in computational linguistics. I am also the founder and coordinator of the Uppsala Computational Literary Studies Group (UCOL).
At CDHU, I am a research coordinator, which means that I try to help HumSam researchers who need technical and/or methodological support in different ways, mostly by being a link to our engineers. I manage our pilot project support calls, plan workshops and seminars, and am currently planning a PhD course in “Cultural Analytics”, to be held in spring 2022. I am also the main organiser of the DHNB 2022 conference that will be hosted at Uppsala University in March next year.
I believe that computational methods will be a standard element in the HumSam research toolbox of the future. My vision for CDHU is that we aid HumSam researchers at UU both by providing technical expertise, and by teaching them to do things themselves (and thereby generate interest to learn further). For me, the latter is especially important as I hope it can empower future generations of HumSam scholars.
My background is in general linguistics, with a more recent focus on English language and use. In my research I have explored statistical methods for analyzing mixed qualitative and quantitative data related to language perception and production, as well as discourse analysis of certain registers such as Legal English, Business English, and teaching English for Specific Purposes. The interdisciplinarity of these topics led me to seek methods of visualizing and combining different forms of data, such as corpus research, network analysis and GIS, and I then became involved in the GIS for Language Studies (LanGIS) group, as well as Digital Humanities here at Uppsala.
My focus at CDHU as communications officer is to connect with and expand our network, both internally among the faculties, centres, departments, and scholars, and externally among international universities, organisations, and research programmes. I would, overall, like for CDHU to communicate a unique profile for academics looking to explore digital methods and techniques, as well as the infrastructure we have available.
My hope is that researchers who have not previously explored digital methods, or who are uncertain what ‘digital humanities’ actually encompasses, reach out to the Centre and expand their digital horizons! The Centre is a fantastic resource that has something for everyone—individual researchers and large international programmes alike—so I look forward to working in Uppsala and across Sweden, as well as with our international partners, to expand the Digital Humanities infrastructures and support that exist for scholars in the Humanities and Social Sciences.
Alexandra Petrulevich, Department of Scandinavian Languages, Uppsala University
Is there a common solution for the editing of West Norse and East Norse manuscripts in the digital age? How do digital repositories impact public and scholarly engagement with manuscript material online? Do current mainstream approaches to digitisation and digital cataloguing necessarily help researchers attain innovative and, most importantly, valid and well-grounded results?
These were some of the questions discussed at the seminar “Northern Europe 1200–1500 CE: West Norse and East Norse Manuscripts in the Digital Age”, dedicated to the digitisation and digital editing of Norse manuscripts as well as current research into this material. The seminar, organised and hosted by the TextWorlds network at Uppsala University, gathered infrastructure professionals working with electronic editing software, digital cataloguing, and repositories, namely MenotaBlitz, Manuscripta.se and Alvin, as well as early-career researchers (see the list below) who have used these and other similar infrastructures in their finished and ongoing projects. The initial round of infrastructure and current research talks was followed by a prolonged Q&A slot and a panel discussion of future developments in the field of digital philology in Scandinavia, as well as major obstacles on the way there.
One of the principal challenges addressed relates to current mainstream practices of digitisation and digital cataloguing. It is often the case that outdated analogue materials, including catalogues, are turned digital without any considerable revision, which in the long run will significantly hamper the development of digital philology. The reproduction of data errors and other kinds of uncertainty undermines the use of “big data” approaches and digital methods, as it is extremely difficult to obtain reliable results. Moreover, the agendas and research questions underlying older analogue catalogues are in many cases different from those shaping current research into West and East Norse manuscript material. For instance, nineteenth-century philologists were preoccupied with studies of “high-status” medieval texts and manuscripts such as Codex Regius of the Poetic Edda (Reykjavík, Stofnun Árna Magnússonar í íslenskum fræðum, GKS 2365 4to). Much less attention was paid to “marginalised” works of post-medieval manuscript tradition, which becomes evident in the cataloguing practices of the time. This principal premise is challenged today, as more and more previously neglected West Norse medieval and post-medieval sagas and other materials gain scholarly attention.
In order to improve the quality of available catalogues and other manuscript datasets as well as to ensure constant growth of born-digital data, it is necessary to build a strong, multisectoral community and implement a collaborative approach to building, improving and sustaining manuscript and cultural heritage infrastructures. Ideally, such a community will be able to inform the funding bodies in charge of investments into digital cultural heritage about the issues outlined above and place adequate cataloguing higher up on their agenda.
Katarzyna Anna Kapitan, H.M. Queen Margrethe II Distinguished Research Fellow at the Vigdís Finnbogadóttir Institute of Foreign Languages at the University of Iceland, the National Museum of Iceland, and the Museum of National History at Frederiksborg Castle in Denmark
Kapitan, Katarzyna Anna (2021b). “Perspectives on Digital Catalogs and Textual Networks of Old Norse literature.” Manuscript Studies: A Journal of the Schoenberg Institute for Manuscript Studies 6:1 (will appear in Gold Open Access in May 2021).
Benjamin G. Martin, Department of History of Science and Ideas, Uppsala University
Recent years have seen a burst of interest in “global intellectual history.” Practitioners debate the precise meaning of this phrase, but it evidently reflects an ambition to take a more inclusive approach to the history of ideas around the world—beyond the Western European core on which the field has traditionally focused—paying attention in particular to the cross-cultural contacts that are so important to our globalized present.  The last decade or so has also witnessed growing interest in what could be called digital intellectual history. This trend is characterized by the application of tools and methods from the digital humanities to approach questions in the history of science and ideas. Intellectual history’s digital turn is less well established than its global one, but it seems likely to continue and to grow in scale. 
These two trends in intellectual history—the global and the digital—seem to have a lot to offer one another. But so far they have developed, as it were, on parallel tracks: both moving forward but with little contact between them. There are several reasons for this. But one particular challenge to using digital methods as part of a global approach to intellectual history has to do with sources. Locating historical source materials that give us access to a “global” realm is hard enough by itself. Few scholars can handle more than a few languages, and studies of the transnational movement and reception of ideas have often tended “simply to multiply the frame of national history.” Locating sources that are global and amenable to digital analysis is harder still. Most of the big text repositories that intellectual historians have used—like Eighteenth-Century Collections Online, used in the historian Peter De Bolla’s recent book The Architecture of Concepts—are national in character. Newspapers, another source in which to follow language use and, perhaps, concept development over time, are likewise almost always national. Moreover, the historical sources that get digitized have so far tended to be from wealthy countries in the global north, with their large, well-financed libraries and expensive digitalization facilities. So far, then, an intellectual historian’s effort to go digital would appear to be at odds with the aspiration to go global.
There is, however, one major arena in which, for the last half-century or so, representatives of nearly every country on earth have regularly met and interacted, and about which a growing mass of digitized historical source materials is available: the world of international organizations. As the historian Sandrine Kott has observed, bodies like the United Nations, the World Health Organization, or the International Olympic Committee can be studied not only as actors in their own right, but as “open social spaces through which we can observe exchanges and circulation”—including exchanges and circulation of ideas.
This is the point of departure for the research project International Ideas at UNESCO. In this study, researchers and systems developers at Uppsala University and at Humlab (the digital humanities center at Umeå University) apply cutting-edge tools of digital text analysis to a selection of texts produced by this international organization. Founded in 1945 to promote “peace in the minds of men,” UNESCO has debated and acted on matters of particular interest to intellectual historians: the organization and dissemination of knowledge, the role of cultural expression in human communities, and the power of communication across national, ideological and cultural boundaries. Since the early 1960s, when the organization was joined by many newly independent post-colonial states, it has handled these matters as a truly global forum. In recent years, the organization has undertaken an ambitious digitalization project, rendering large quantities of its publications and archival documentation available to the public. In fact, many of these texts still require a good bit of curating before they can be analyzed with digital methods; that curating is what we are working on now! As we move forward, our hope is that new methods of digital text analysis, sophisticated enough to chart conceptual relations and development, will offer exciting ways to explore the global discourse captured in these sources.
Can intellectual history’s digital and global turns be brought together in a way that benefits both? At its most ambitious, International Ideas at UNESCO is an effort to find out.
See for example Samuel Moyn and Andrew Sartori, eds., Global Intellectual History (New York: Columbia University Press, 2013); and the journal Global Intellectual History.
See the discussion of this trend in: Dan Edelstein, “Intellectual History and Digital Humanities”, Modern Intellectual History 13, 1 (2016): 237-246; Mark J. Hill, “Invisible Interpretations: Reflections on the Digital Humanities and Intellectual History”, Global Intellectual History 1, 2 (2016): 130-150; and Jennifer London, “Re-imagining the Cambridge School in the Age of Digital Humanities”, Annual Review of Political Science 19, 1 (2016): 351-373.
Christopher L. Hill, “Conceptual Universalization in the Transnational Nineteenth Century”, in S. Moyn and A. Sartori, eds., Global Intellectual History (New York: Columbia University Press, 2013), 135.
Peter De Bolla, The Architecture of Concepts: The Historical Formation of Human Rights (New York: Fordham University Press, 2013).
https://zeithistorische-forschungen.de/3-2011/4563
Dr. Agiatis Benardou Digital Curation Unit, ATHENA Research Center and Department of Informatics, Athens University of Economics and Business
Reservations are sporadically expressed about applying digital methods and immersive technologies (3D modeling, artificial intelligence, augmented and mixed reality) to enhance, complement and augment human remains for educational or similar purposes in museum exhibitions, historical sites or even popular television. Even so, it is evident that immersive experiences considerably attract wider audiences, expand potential stakeholder groups and enhance the visitor experience (Ynnerman et al. 2016).
The application of digital methods to human remains concerns researchers and specialized practitioners. The ability of 3D modeling to quickly collect high-quality data from anthropological specimens has had wide-reaching implications, from conservation and restoration, to public engagement, to the production of replicas and increased accessibility of digital data (White, Hirst, and Smith 2018).
These developments accelerated the adaptation, adjustment and eventual transformation of such methods, so that digital human remains could be exhibited to the general public, often in interactive environments.
Some notable examples of immersive technologies for the display of human remains are the interactive visualization and digital anatomy of the Gebelein Man at the British Museum and the immersive experience of the Grauballe Man at the Moesgaard Museum in Denmark, where visitors can activate animations of votive ceremonies (Asingh and Lynnerup 2016).
On a more controversial note, in 2019 BBC1 aired “Jack the Ripper – The Case Reopened”, in which a number of experts tried to shed new light on the modus operandi of the famous serial killer by using new technologies and virtual reality. By combining archival records with autopsy simulations, the programme raised concerns about the privacy of the victims and that of their relatives or descendants (Benardou 2019).
While major cultural organizations such as the British Museum regularly review their policies and regulatory frameworks (Fletcher, Antoine, and Hill 2014), issues raised by the application of immersive technologies to the public display of human remains remain unresolved. What are the ethical concerns around these practices, and in which cases do these concerns revolve around the remains themselves rather than around the sensitive narratives that frequently accompany them? Are digital surrogates the answer in cases of unpublished anthropological material and its exhibition to wider audiences? And what are the restrictions (licensing, curation, reuse) on “human remains as data”? The answers will emerge gradually, as ever-evolving immersive technologies are applied to an increasing number of human bodies from the distant or recent past.
Asingh, Pauline, and Niels Lynnerup. 2016. “Bog Bodies: The Grauballe Man.” Technè. La Science Au Service de l’histoire de l’art et de La Préservation Des Biens Culturels 44: 84–89. https://doi.org/10.4000/techne.1134.
Ynnerman, Anders, Thomas Rydell, Daniel Antoine, David Hughes, Anders Persson, and Patric Ljung. 2016. “Interactive Visualization of 3D Scanned Mummies at Public Venues.” Communications of the ACM 59 (12): 72–81. https://doi.org/10.1145/2950040.
The event, which was held in June 2020, was aimed at researchers, students and professionals working with museum and archive collections, digitalization and/or research strategies. Its purpose was to provide examples and advice on using metadata for research and outreach, to inform about standards and practices regarding metadata, and to highlight the benefits of heritage institutions collaborating with academia to enrich collection metadata.
Yannick de Raaff, Groningen Institute of Archaeology, University of Groningen
In this blogpost I would like to illustrate how and why we have applied digital techniques (photogrammetry and Virtual Reality) to solve a specific archaeological problem. Our study concerns the architecture of a Bronze Age (early Mycenaean; ca. 1700-1420 BCE) tomb from the North Cemetery of Ayios Vasileios, Lakonia, Greece. This particular tomb (called Tomb 21) is roughly rectangular in shape (inner dimensions ca. 2.26 x 1.33 m), and was filled to the edge with a large mass of some 200 stones, evidently the remains of a roof (figure 1). However, even after the complete removal of all the stones, careful recording, and the excavation of the inhumations underneath, we remained unsure about the original shape and construction of the roof. How could this hodgepodge of stones once have formed a cover, and how had it collapsed? Rebuilding the roof in real life was not possible, since the tomb had been backfilled after the excavation was completed. Therefore, we decided to address this issue using digital techniques.
With the financial help of several grants we started working together with the Virtual and Augmented Reality experts from the Reality Center of the University of Groningen. A VR-environment was created in Unreal Engine using a surface model of the empty tomb, to which 3D models of the collapsed stones were added. Both the interior of the excavated tomb and the stones were modelled with Structure from Motion (SfM; also called photogrammetry). The VR-environment offered us three main advantages. Firstly, we could scrutinize the still standing walls of the tomb from any angle, even after completion of the excavation. Secondly, by carefully studying the various photographs and videos of the collapsed stones as they were being excavated, we could approximate their position within the tomb and place them in that position within the VR-environment. That allowed us to better understand the relative position of the various stones within the tomb and vis-à-vis each other, and thus the collapse (figure 2). Lastly, the stones could be restacked interactively in an attempt to recreate or approximate the original appearance of the tomb’s roof, and to explore which designs were plausible and which were not.
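In simplified terms, the placement step amounts to taking each scanned stone mesh and translating it to its approximate excavated position within the tomb’s coordinate system. The following is a minimal, purely illustrative sketch of that idea; the names, data structures and coordinates are hypothetical and are not the project’s actual Unreal Engine code:

```python
# Illustrative sketch only: placing a scanned stone mesh at its
# approximate excavated position by translating its vertices.
# All names and coordinates here are hypothetical.

from dataclasses import dataclass


@dataclass
class StoneMesh:
    name: str
    vertices: list  # (x, y, z) tuples in the mesh's local coordinates


def place_at(mesh, position):
    """Return the mesh's vertices translated so its local origin
    sits at `position` (x, y, z) in the tomb's coordinate system."""
    px, py, pz = position
    return [(x + px, y + py, z + pz) for x, y, z in mesh.vertices]


# A hypothetical slab, approximated from excavation photos and videos:
slab = StoneMesh("slab_01", [(0, 0, 0), (1.0, 0, 0), (1.0, 0.5, 0), (0, 0.5, 0)])
placed = place_at(slab, (0.5, 0.25, -1.0))  # depth below the tomb rim
```

In the actual VR-environment this placement (including rotation, which the sketch omits) was done interactively with the controllers rather than numerically, but the underlying idea is the same: each stone carries a transform that records where it sat within the tomb.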
After the modeling, the programming, and the gathering of parallels from contemporary tombs, it was finally time to put on the VR-goggles, strap on the controllers, and start on our life-size three-dimensional puzzle. Weeks were spent in the virtual tomb, labouring under a virtual Greek sun, grabbing stones with the controllers, moving them around, putting them in place, pressing the ‘save’ button, and trying again. After many attempts, we were able to confidently refute a number of designs. Most likely, the tomb was first covered by a series of beams, on top of which the largest slabs were placed first (these were found deepest inside the tomb and must therefore have fallen down first), with the remainder of the stones placed on top, creating a cairn (figure 3). Judging by the way the stones had fallen, it seems likely that the beams broke, causing first the slabs and then the rest to tumble inwards. Instead of merely hypothesizing about the tomb’s cover and its collapse, the digital techniques made it possible for us to test hypotheses in a structured and well-argued manner. Virtual Reality was, in a way, used to perform experimental archaeology in a digital environment. (For further information, see the embedded video below.)
The project is included in the DIG IT ALL exhibition of the University Museum (University of Groningen), as is the video below this blogpost. More information on the exhibition can be found on the website: https://www.universiteitsmuseumgroningen.nl. The exhibition is part of the centennial celebrations of the Groningen Institute of Archaeology and includes various other archaeological projects that have used innovative digital techniques. Our project has so far been presented at various conferences (click the link for the PowerPoint presentation), and a scientific article will appear in the proceedings of the Lakonia conference.
The excavations at the Ayios Vasileios North Cemetery are directed by Sofia Voutsaki, as part of the Ayios Vasileios Project, directed by A. Vasilogamvrou, Director Emerita of the Laconia Ephorate, and carried out under the auspices of the Archaeological Society at Athens. Our thanks go out to the Groningen Institute of Archaeology (GIA), the Ammodo Foundation, the Institute of Aegean Prehistory, the Mediterranean Archaeology Trust and the Centre of Digital Humanities. This reconstruction project was a collaboration between archaeologists from the GIA (Yannick de Raaff, Sofia Voutsaki, Theo Verlaan and Gary Nobles) and staff of the Centre for Information Technology, interfaculty V / AR hub (Gert-Jan Verheij, Frans van Hoesel and Pjotr Svetachov).