By Dr. Kerstin B Andersson, Dept. of Linguistics and Philology, Uppsala University / Swedish Council of Higher Education
I’m a media anthropologist and Indologist, currently working on the Indian diaspora and communication. Over the last couple of years, I have focused my research on the growing academic field of migration and the use of new media and social media.
The impact and importance of new technologies for migrants are well established. The appropriation of ICTs and new media environments has become a ubiquitous feature of everyday life in migrant groups. The development of the research field is closely tied to the expansion of ICTs and new media. The first studies dealing explicitly with migration and new media appeared at the end of the 1990s; since then, it has become an established academic research field.
Academic research on migration and the use of new media is interdisciplinary, drawing on approaches from a number of subject areas, such as anthropology, migration studies, media and communication studies, studies in new science, Internet studies, sociology, and cultural studies. The research area remains understudied, is characterized by rapid changes and shifts, and is shaped by the changing structural conditions of migrants and the proliferation of media forms. For example, the 2015 European refugee crisis prompted a number of studies on the impact of new media on forced migration.
In a recent article, I provide a comprehensive overview of this rapidly expanding academic field. So far, the field has been characterized by a growing number of empirical case studies on the use of new media in migrant groups. Through a review of the existing literature, I offer an inclusive narrative synthesis of the field. The result is presented in the form of a narrative literature review, in which I elaborate on the status of the research field, the primary themes and topics of research interest, the theoretical and conceptual issues under investigation, and the methodological approaches used in this field.
By Amalia Juneström, Department of ALM, Uppsala University
My name is Amalia, and I’m a PhD student from the Department of ALM at Uppsala University. Thanks to a grant from Riksbankens Jubileumsfond – The Swedish Foundation for Humanities and Social Sciences – which allows researchers to attend the annual summer school for the digital humanities at Oxford University, I had the privilege of participating in the week-long summer school in Oxford earlier this summer.
When I left for Oxford, I had already been involved in the planning of our own new international and interdisciplinary digital humanities master’s programme, which will start in our department this autumn. It has been an enjoyable experience, and I was looking forward to participating further. However, although I was well aware of the increasing role played by tools and techniques from the digital humanities field within my own discipline, my relationship to them had so far been tangential. To tell the truth, my own experience of many of the new computer-based techniques used within both my own field and the digital humanities had been one based on a mixture of fascination and trepidation. In short, I felt an urgent need to broaden my understanding of this knowledge domain; the opportunity to participate in a one-week introduction course was therefore much appreciated.
The summer school offered a variety of strands providing insight into different domains of knowledge within the digital landscape. In order to improve my general understanding of the digital humanities, I chose the strand ‘An Introduction to Digital Humanities’. In terms of participant numbers, this strand turned out to be by far the largest within the summer school, and it was well suited for those who, like me, wanted to better acquaint themselves with the tools and methods found within the interdisciplinary field of the digital humanities. Unlike the other workshop-based strands, which offered hands-on practical training in the techniques and tools of each course, the strand that I chose was mainly lecture based, making it well suited to beginners. By drawing on expertise from many different fields, the lectures offered insight into a range of research approaches embraced by the digital humanities.
During the five days of the summer school, we explored a selection of research areas such as text mining, digital archiving and musicology. I think that everyone who participated found it useful to work through such a wide variety of topics, digital tools and methodological spheres of application. All in all, I found the selection of themes and topics at the summer school very well organised and rewarding. Also, I am truly convinced that location and setting can have a great impact on the outcomes of learning, and what location could be better for acquiring new knowledge than Oxford, one of the world’s most famous centres of learning? But even if you don’t believe there’s a connection between location and successful learning, the historical setting made the experience highly memorable, and I really appreciated our accommodation in the romantically Victorian red-brick Keble College, whose historic atmosphere was reminiscent of Brideshead Revisited.
By attending the summer school, I definitely acquired a better understanding of some of the research methods and techniques which are important within my own research field and which are of interest to my own academic journey. I would like to thank Riksbankens Jubileumsfond for the opportunity to take part in the summer school, and I encourage everyone who has an interest in the digital humanities to check out the programme for next year’s summer school and apply!
By Ylva Söderfeldt, Department of History of Ideas, Uppsala University
Today, patient organizations exist for virtually every illness. They range from small, informal self-help networks to large, well-funded associations. Although they wield quite significant power – as intermediaries between patients and healthcare providers, for instance, or as lobbyists in the political sphere – they have been mostly ignored in historical research. We know very little about how these important stakeholders in healthcare emerged and evolved.
An ongoing pilot study at Uppsala University is now developing methods that could make a comprehensive history of the patient movement possible. With the generous support of the Kerstin Hejdenberg scholarship from the Swedish Asthma and Allergy Association, the association’s member journal Allergia is currently being digitised. Together with Karl Berglund, University Library, and Matts Lindström, Digital Humanities Uppsala, we are analyzing the material with text mining tools.
The purpose is to determine how we can measure changes in vocabulary that signal discursive transformations around allergy. From previous research, we know that the second half of the twentieth century saw substantial change to the illness concept on both a medical and a cultural level, but the question is whether this is reflected in the publications of the patient organization – and if so, how can we best define and measure it quantitatively?
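One simple way to operationalize such a question is to track the relative frequency of selected terms across issues or years. The sketch below is a hypothetical illustration only: the sample texts, years, and the tracked term are invented for demonstration and are not drawn from the Allergia material.

```python
from collections import Counter

# Minimal sketch of one quantitative measure of vocabulary change:
# a term's relative frequency (occurrences per 1,000 tokens) per year.
# The issue texts below are invented sample data, not the Allergia corpus.
issues = {
    1965: "allergisk astma och astma hos barn",
    1995: "allergi atopi och atopisk astma hos vuxna",
}

def relative_frequency(text: str, term: str) -> float:
    """Occurrences of `term` per 1,000 tokens in `text` (0.0 if text is empty)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return 1000 * Counter(tokens)[term] / len(tokens)

# Track how often a (hypothetical) diagnostic term appears over time.
for year, text in sorted(issues.items()):
    print(year, round(relative_frequency(text, "atopisk"), 1))
```

On a real, digitised run of the journal, plotting such per-year frequencies for a set of candidate terms is one baseline way of making discursive shifts visible before moving on to more sophisticated text-mining methods.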
In the short term, results from the project are expected to offer important insights into the history of asthma and allergies – some of the most prevalent diseases in present-day society – and into the role that patients themselves played in defining their illness. But the project also has a longer-term goal: by going through all stages from digitization, via pre-processing, to analysis, we gather crucial experience in making the move from analog to digital history. We test and evaluate methods and modes of work for text mining a relatively small, inconsistently structured corpus, with research questions relating to the history of knowledge. Without doubt, the experience we gather will be helpful for other researchers who face similar challenges.
By Karolina Andersdotter, Digital Methods Librarian, Uppsala University Library.
At the end of May 2019, Uppsala University was appointed Cooperating Partner of DARIAH EU. DARIAH stands for Digital Research Infrastructure for the Arts and Humanities and is a pan-European infrastructure for arts and humanities scholars working with computational methods.
DARIAH EU consists of 17 member countries as well as several cooperating partners in eleven non-member countries – including Sweden. Uppsala University is the second Swedish institution to join, following Linnaeus University. Together with Linnaeus, we now aim to form a national infrastructure consortium with a view to full membership in DARIAH. We are also in conversation with the centres for digital humanities at Umeå, Lund, and Gothenburg Universities.
The initial commitment as a cooperating partner runs for two years and is administered via DH Uppsala. Through communication between UU researchers and the DARIAH ERIC Virtual Competence Centres, UU aims at knowledge exchange on linked and open collections and data, content management and storage of research data, enhancement of digital scholarly tools, and digital research infrastructures, environments and standards.
These points are key issues identified in the current draft of goals and strategies for Uppsala University; the cooperation can help us towards the goals of first-class digital research and education infrastructures, open science, and safe and open storage of and access to data.
Knowledge exchange through the VCCs is expected to develop and improve the research support services provided by the university library, thus making an impact for all researchers in need of digital support through the scholarly life cycle.
Bill Kretzschmar is Harry and Jane Willson Professor in Humanities at the University of Georgia and is a visiting professor at the University of Oulu in Finland. He edited the American Linguistic Atlas Project for 34 years, the oldest national research project to survey how people speak differently across the country, which led to his preparation of American pronunciations for the online Oxford English Dictionary. He has been active in corpus linguistics, including work on tobacco industry documents. He has been influential in the development of digital methods for analysis and presentation of language variation, including applications of complexity science.
In May, Bill Kretzschmar visited Digital Humanities Uppsala, in collaboration with the language GIS research network, to deliver a talk in the DH Uppsala Seminar Series. Kretzschmar addressed the theme of sustainability in an institutional setting and proposed collaboration with the university library as the only realistic option for long-term sustainability – drawing upon his long experience at the University of Georgia (and its Digital Humanities Laboratory, or DigiLab, more specifically). A video recording from the seminar is available here (in progress).
We also used the occasion to conduct a short interview with Bill, in which he touches, among other things, upon the technological history of early DH (humanities computing) as experienced from his perspective, as well as the matter of sustainability. It is published below.
Could you talk to us about the transformation of The Linguistic Atlas Project from a printed publication to its early digital versions – especially in relation to the material and technological conditions that surrounded this process?
When I first started work on the Linguistic Atlas Project in 1977 (!), as a graduate assistant, the whole project that I could see was on paper. The team of graduate assistants was working on recopying some of the field records, and I was using a typewriter to prepare camera-ready copy of some of the field records for publication in the University of Chicago Press series of fascicles of the Atlas of the Middle and South Atlantic States. There were a few audio recordings on reel-to-reel tape, but we weren’t working with those at the time. Our very first step in making the paper records digital was a grant from the National Endowment for the Humanities to start a database of responses; at that point, in 1983, I made the decision to use PCs instead of the university mainframe as something we could control ourselves, and because of the new availability of a Winchester hard drive (all 10 MB of its storage). After I took over the Atlas in 1984, we were still working with paper records, and my first task was to invent the digital technology necessary to prepare camera-ready copy for the fascicle series. So I learned about type founding and created phonetic fonts that we could see on the computer screen, using a special graphics board that let me design phonetic characters in “high ASCII” (codes 128–255) and print them out as dot designs using the newly available Hewlett Packard laser printer. About that time the University of Chicago Press cancelled our print publication contract, so these methods were just used to produce the camera-ready copy for our Middle and South Atlantic Handbook. I could use the new publication system to make camera-ready copy for the Journal of English Linguistics, which I edited at the time and had printed privately until the mid-1990s.
Also in the late 1980s, I taught myself how to use the RBase database system, because the programmer for our earlier NEH grant failed to get it to work, and designed the database structure for the Atlas that we still use today. We got another NEH grant in the early 1990s to keyboard Atlas data – and found out that it was just too time-consuming and expensive to enter massive amounts of phonetic data – but we completed entry of about 15% of the data, and that was enough to launch the whole digital process. That digital data allowed all of our new developments with GIS: interactive GIS for that data, first on Macs and later the Web, and then applications of technical geography like spatial autocorrelation, density estimation, and Kohonen self-organizing maps.
As your previous answer exemplifies, the particular needs and conditions of humanities infrastructures will always be in flux – and of course factors other than technology (institutional, political, financial) play an equally important role. What is your impression of current conditions within American and European academia?
When I started working with computers on humanities tasks in the 1980s, some people were using mainframes, but we decided to use separate PCs because we could control them ourselves, and not have to wait for our low-priority programs to run in the middle of the night, and because we could get at least some mass storage. But my work with Linguistic Atlas data always pushed the limits of storage, and of processing once we started using statistics. We were noticed by the University of Georgia Computing and Networking Service when we tried to run statistics on their mainframe and used more computing time than they realized we needed. While personal computers will always be our choice for data entry and writing/running small programs, we now have to have larger infrastructure for our large data sets and for Web distribution of our materials. This means cooperation with units in the university that manage bigger infrastructure than we can run in the office. It took a long time for us to realize that our natural partner was not the computing and networking unit at the university, but instead the university library. Many of our colleagues in the sciences need ultra-fast processing, and that is what our Georgia computing administration has provided. But what we need in the humanities is the ability to create interactive programs to store and present great masses of information, and the library is the unit of the university whose mission it is to do that. My grant resources have helped the library to create the infrastructure that we need for the Atlas, and the library has gone even further to make such infrastructure available to others in the humanities. In Europe the situation is very uneven. I have heard of impressive humanities computing networks in Germany and Norway. Not so much, yet, in the Nordic countries, even though there are lots of great digital humanities projects in Finland, Norway, and Sweden.
Support for the digital humanities in England seems to have declined, for example with the demise of its digital humanities institutional organization and the removal of digital humanities from the Oxford Computing and Networking Service to the Bodleian Library. In Eastern and Southern Europe (perhaps with Italy as exception) the situation is much worse. The bottom line is that those of us in the humanities really have to have institutional partners for infrastructure. We cannot sit by ourselves with laptops in our ivory towers.
While cooperation is essential for building digital research infrastructures for the humanities, there is also always the difficult question of sustainability and maintenance. What happens after a project is finished – and what can we do to keep the digital resources sustained?
The end of projects is inevitable. All of our digital humanities developments begin with smart people who dream them up and find a way to implement them. But those smart people are usually not followed by people quite as smart, or at least not as interested in the aging projects as the original inventors. When projects lose their momentum, there simply is no current way to keep them alive, even as static, non-working images instead of interactive programs. I have tried to plan for this on the Linguistic Atlas Project by creating a maintenance-free part of our site, the Data Download Center, a file structure from which users can download all of our data. Our interactive elements in the Web site will eventually go dark when there is nobody to maintain them. Our partner, the library, cannot afford to pay people to maintain our site. That’s been my biggest job over the decades: not to invent the tools and sites, but to find money consistently to pay for their maintenance and development. While we do not have access to as much grant money as natural and physical scientists do, not to mention medical professionals, there has been enough money for me to keep the office open for decades. But when I retire, none of that money will be coming in. The best we can hope for is that maintenance-free portions of sites will still be available even after the fancier parts that need maintenance have failed. Maybe this is a gloomy prediction, but at the moment I do not have a better plan.