
Letter from the editors

Open data and data re-use are nothing new to the EuropeanaTech community. Opening up data and building platforms that allow for interoperability, research, annotation, community engagement and creative re-use are cornerstones for Europeana and the EuropeanaTech community. With this issue of EuropeanaTech Insight, we’re happy to highlight several different examples of the extremely positive things that can happen when data and content are opened up.

From IIIF and its numerous add-ons making the sharing of images, metadata and annotations easier than ever for research and memory institutions, to the future of watching and interacting with broadcast television, open data provides seemingly endless opportunities to enrich daily life.

These examples are a call to arms for all the data providers out there. Rich, open data is the lifeblood of projects like these.

We hope the following articles inspire you and serve as examples of what is being done in the EuropeanaTech Community. Feel free to leave examples of your own projects below or get in touch with us to share your work.

Historypin and LODLAM: Building A Culture of Open

Jon Voss, Shift

Surveys suggest that associational life and civic engagement are in decline. Fewer and fewer people are participating in formal groups, religions, clubs and other organised social activities. At the same time, research shows that it’s exactly this kind of engagement that increases happiness and improves communities. Our goal at Historypin is to strengthen this local social fabric by bringing people together through the sharing of local heritage. Our aim as a non-profit organisation is to have significant, measurable social impact by building on heritage in a connective, inspiring and open way.

Part of our ‘open’ strategy is open data. Memory institutions around the world are increasingly sharing their content and engaging communities in the environment and spirit of a free and open World Wide Web, enabling projects like ours. In collaborative efforts with institutions and aggregators like Europeana, we’re able to scale the efforts of individuals interested in personal history, and expand that focus to include the history of towns, neighbourhoods and communities. Of course, true open data exchange requires ongoing work and participation in the growing movement of memory institutions committed to an open web - an open web not just of documents, but of data.

Example of a Historypin local project

Part of ‘open’ is open and democratic access to tools. The availability of technical tools puts the power of storytelling into the hands of communities that have been largely absent from the dominant narrative of history. This has opened a two-way street with memory institutions, which are not only sharing their content for enrichment and solving ‘history mysteries’ with the public, but also expanding their collections to include previously missing or excluded voices.

Of course, one of the biggest ‘opens’ in the 21st century is open source, which is why we are looking at the challenges not just of open data exchange but also of free/open source software. Open access to our data via APIs, exchange of data using open standards, and even building our technology stack on open source stalwarts like Apache httpd, MySQL, Python and Django are all part of our approach to making our data and heritage available and remixable by all.

Screenshot showing Historypin’s international reach

As someone who grew up on movies and television shows exploring the wonder and possibility of time travel, I found my own time machine through the discovery of a small box with a journal and photographs from my grandfather’s service in World War One. My grandfather died when I was young and I have very little memory of him, but by mapping and piecing together family photos, his journal and discussions with those who knew him, I’ve been able to, quite literally, walk in my grandfather’s footsteps.

Standing next to my grandfather at L'église de la Madeleine in Paris, 2015/1919.

So many people have that same fascination with what was here before us: they ask who was here, what the world was like, and how it has changed. Because it’s such a universal longing, we can use personal history as an incredibly natural way to connect people across generations and cultures. Our technology, at its best, helps bring these personal pasts to life and makes history relevant to new generations.

While it’s not easy, we know we’re not alone, and we’re ready to make time travel happen. Using the open building blocks of our predecessors, we hope that Historypin has a role to play in strengthening communities through the power of local history. With our web platform and a growing network of global projects, we’d like to turn conversations and connections into measurable social good, powered by our cultural heritage. We’ll be coming to you for help, though: if we work openly enough together, we can leverage heritage and memory to strengthen local communities, at the scale of the World Wide Web.

International Image Interoperability Framework

Petr Pridal, Ph.D, Klokan Technologies GmbH

Hidden within the digital repositories of cultural heritage institutions are true treasures, waiting for discovery, interlinking, data extraction and re-use by researchers and digital humanists. There are millions of pages from books, newspapers and manuscripts already scanned, as well as images of maps, art, photos, scrolls and sheet music.

Unfortunately, the digital repositories and websites that present these documents are usually created for a single institution or a small group of institutions, and the historical cultural content is locked into a single software system with fixed limitations and functionalities for viewing and personalization. Mashing up and linking with other content is hard or almost impossible. Typically, the presentation platform is developed and supplied by a single software vendor as well, and the digital repositories thus become independent islands of locked-in content.

This situation complicates or even prevents the development of a new generation of user-friendly online tools and environments that could make academic research in the digital humanities more straightforward. The generation of digital natives is waiting for such online tools. Instead of downloading high-resolution images to laptops or home computers, there is an expectation and a need to work with high-resolution images remotely, using new, reliable online services from different providers. Just as we now use collaborative online tools such as Google Docs or Sheets, we will work collaboratively and online with cultural data and historical documents in the near future.

Due to the multitude of systems and platforms in place, tasks such as adding notes to digital objects (annotations), transcribing visible text, or citing and linking an image to related information sources are difficult, whether they address the whole digital document, a chosen page or even a selected cutout of a page or image. It is time for a change: to allow for interoperability between different digital repositories, we need standardisation and documented open protocols for accessing online images in a common way.

This is the main reason why a group of researchers and experts from the cultural heritage area started to work on the International Image Interoperability Framework (IIIF) and the definition of the open protocols and standards alluded to above. The main mission of the group is to allow re-use and ensure interoperability and accessibility of the world’s image repositories in a transparent and open way.

Participants come from several prominent universities (including Stanford, Cornell, Oxford, Princeton, Yale and Harvard), from important cultural heritage organisations and national libraries (the British Library, the Bibliothèque nationale de France, the Nasjonalbiblioteket in Norway, the Kongelige Bibliotek in Denmark, and the national libraries of Wales, New Zealand, Poland and Serbia, among others) and from research institutes and other organisations. The group documents its activities online at iiif.io, at meetings and workshops, and on the iiif-discuss mailing list. New participants are very welcome.

At the core of IIIF are two protocols: the Image API and the Presentation API.

The IIIF Image API defines how to request an image from a remote server in a required form: a small thumbnail, a high-resolution preview, or a set of tiled images for zooming in an interactive web or mobile viewer. The protocol is straightforward, and the result of a request is typically a single image displayable in a web browser.

An Image API request defining a region of an image at an arbitrary size
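
To make this concrete, here is a minimal sketch in Python of how such requests are formed. The server name and image identifier are hypothetical; the URL template and parameter values follow the Image API specification.

```python
# Minimal sketch of IIIF Image API requests, assuming a hypothetical
# server at example.org. The URL template itself comes from the spec:
#   {server}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
BASE = "https://example.org/iiif/manuscript-001"

def iiif_url(region="full", size="full", rotation="0",
             quality="default", fmt="jpg"):
    """Build an Image API request URL from its five path parameters."""
    return f"{BASE}/{region}/{size}/{rotation}/{quality}.{fmt}"

# A small thumbnail: the full image scaled to 150 pixels wide.
print(iiif_url(size="150,"))
# A cutout: a 1000x1000-pixel region starting at pixel (2048, 1024).
print(iiif_url(region="2048,1024,1000,1000"))
# The server describes its capabilities (sizes, tiles, features) here:
print(f"{BASE}/info.json")
```

Each of these URLs returns a plain image (or, for info.json, a JSON description), which is what makes the protocol so easy to build viewers on.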

The IIIF Presentation API specifies structural and presentation information for digital content. Typically it provides the minimal metadata (title, copyright, etc.) required for presentation, together with sequence information for the individual images of a single physical object (pages in a book, the front and back of a map, etc.). The included concept of a canvas allows images to be combined with text annotations, and possibly with other materials such as audio.

The whole Presentation API is web-developer friendly: it is delivered in JSON (in fact in the Linked Data variant, JSON-LD) and is ready for AJAX calls.
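
For a feel of the structure, the short sketch below walks a manifest in Python. The manifest URL is hypothetical; the nesting (manifest, sequences, canvases, images) follows the Presentation API 2.x model.

```python
import json
import urllib.request

# Hypothetical manifest URL; the nesting below follows the
# Presentation API 2.x model: manifest -> sequences -> canvases -> images.
MANIFEST = "https://example.org/iiif/manuscript-001/manifest.json"

with urllib.request.urlopen(MANIFEST) as resp:
    manifest = json.load(resp)

print(manifest["label"])  # the minimal descriptive metadata
for canvas in manifest["sequences"][0]["canvases"]:
    # Each canvas is one view of the physical object (a page, one side
    # of a map) onto which images and annotations are 'painted'.
    image = canvas["images"][0]["resource"]["@id"]
    print(canvas["label"], "->", image)
```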

Because the standards are developed in very close cooperation with the cultural institutions, correct crediting and rights management are ensured. Institutions can decide to publish lower-resolution copies publicly online while protecting high-resolution copies with watermarking or by offering access only after authentication - which is defined as part of the protocols as well. In this way it is possible to control access to and distribution of the digital content - and also to demonstrate, from logs and analytics, how many users accessed and used the exposed documents.
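
How a server enforces such a policy is left to implementers; the sketch below shows one simple possibility - a resolution ceiling for unauthenticated users. The 1,000-pixel limit and the parsing are illustrative assumptions, not part of the IIIF specifications.

```python
# Illustrative sketch only: cap the Image API 'size' parameter for
# anonymous users. The 1000 px ceiling is an assumed local policy.
MAX_ANON_WIDTH = 1000

def size_allowed(size_param: str, authenticated: bool) -> bool:
    """Allow any size for authenticated users; cap anonymous requests.

    Only the 'w,' / 'w,h' / '!w,h' size forms are recognised here;
    anything else ('full', 'pct:n', ',h') is conservatively denied
    to anonymous users in this sketch.
    """
    if authenticated:
        return True
    width = size_param.lstrip("!").split(",")[0]
    return width.isdigit() and int(width) <= MAX_ANON_WIDTH

print(size_allowed("150,", authenticated=False))  # True  - thumbnail
print(size_allowed("full", authenticated=False))  # False - full size
print(size_allowed("full", authenticated=True))   # True
```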

As the IIIF standard gains popularity in the community, more and more open-source components become available for direct application and combination. These include servers (Loris, IIPImage with JPEG2000 and IIIF support), viewers (OpenSeadragon, Leaflet, KlokanTech IIIFViewer) and tools for enriching digital content and supporting online research. All of these can easily be combined in any (research) project to deliver new functionality and attractive user interfaces. For examples, see http://showcase.iiif.io/ and http://iiif.io/apps-demos.html.

As a demonstration of a complete IIIF tool, it is definitely worth testing the open-source platform Mirador, which offers a research environment able to load and combine documents from different servers.

Similarly, an online service called Georeferencer fully supports IIIF and can turn scanned maps from remote servers into Google Maps overlays and map layers in a collaborative way.

More information about IIIF is available at http://iiif.io/.

IIIF in Action

Glen Robson, National Library of Wales

Editors’ note: the following article also describes IIIF, but additionally provides concrete examples of IIIF in action. We have omitted the general description of IIIF; for more detail, you can revisit the article above by Petr Pridal.

The emerging International Image Interoperability Framework (IIIF) standard is an exciting development for institutions with digital image repositories - large and small - that are looking to open up their data to wider audiences for research and re-use. This article will highlight the wider organisational benefits of implementing IIIF at the National Library of Wales (NLW), and hopefully it will inspire others to take a look at IIIF.

The IIIF standard defines two APIs which allow access to images and presentation metadata.

The Image API allows you to request specific areas of an image by noting the region, size, rotation, quality and format of the desired image in an open, public, shareable URL. This functionality allows viewers built upon this API to display ‘zoomable’ images, even from different institutions, and also allows users to share cropped or resized images.

The Presentation API allows images to be ordered into more complex ‘collections’, so that related images can be grouped together for discovery and management. For example, a set of images could be associated to form a manuscript, book, photo album or an issue of a newspaper. The real power of the Presentation API is that it allows images from different institutions - irrespective of location - to be brought together (virtual unification). For example, if the pages of a manuscript have been dispersed geographically across various institutions, IIIF can bring them together digitally.
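
As a rough sketch of what such virtual unification looks like under the hood, the following Python builds a minimal Presentation API 2.x manifest whose two canvases draw their images from Image API services at two different institutions. All URLs here are hypothetical.

```python
import json

def canvas(label: str, image_service: str, w: int = 2000, h: int = 3000) -> dict:
    """One canvas whose painted image comes from a remote Image API service."""
    canvas_id = f"https://example.org/iiif/reunited-ms/canvas/{label}"
    return {
        "@id": canvas_id,
        "@type": "sc:Canvas",
        "label": label,
        "width": w,
        "height": h,
        "images": [{
            "@type": "oa:Annotation",
            "motivation": "sc:painting",
            "on": canvas_id,
            "resource": {
                "@id": f"{image_service}/full/full/0/default.jpg",
                "@type": "dctypes:Image",
                "service": {"@id": image_service},
            },
        }],
    }

# Two leaves of one manuscript, served by two different institutions.
manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://example.org/iiif/reunited-ms/manifest.json",
    "@type": "sc:Manifest",
    "label": "Dispersed manuscript, digitally reunited",
    "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [
            canvas("f1r", "https://library-a.example/iiif/ms-123-f1r"),
            canvas("f1v", "https://library-b.example/iiif/frag-9"),
        ],
    }],
}
print(json.dumps(manifest, indent=2))
```

Any IIIF viewer pointed at such a manifest displays the dispersed pages as one continuous object, with no images ever leaving their home institutions.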

IIIF in Action

NLW has implemented both the Presentation and Image API and this has allowed us to collaborate with international institutions on a number of projects that would have been more difficult, or even impossible, without IIIF. Some case studies are highlighted below:

1. Guillaume de Machaut Manuscript

NLW has shared the Guillaume de Machaut Manuscript - hosted in the NLW digital repository - with researchers at Stanford University, who intend to transcribe the content and collaboratively investigate the manuscript. Being able to share this manuscript through IIIF provides greater exposure for it and also allows the researchers to import it into their own tools for collaborative analysis. These tools include the IIIF viewer Mirador and the manuscript annotation tool TPEN.

Screen shot of the Guillaume de Machaut Manuscript

2. Cynefin – The Tithe Maps of Wales

NLW has also been working with the public archives in Wales and Klokan Technologies to develop a crowdsourcing, georeferencing and transcription tool that allows public audiences to enrich maps dating from the 1800s. The images and metadata are stored in the NLW repository but shared with Klokan using the IIIF standard - removing the logistical need to send images on hard drives and to agree on a common metadata standard.

Screen shot of Cynefin

3. Welsh Newspapers Online

NLW has a large collection of digitised full-text newspaper images (over 1 million pages) and, using the Image API, has been able to customise the OpenSeadragon viewer as a generic viewer to display the content. NLW previously had to develop a bespoke viewer for each project, which relied on scarce local development resources and involved a large maintenance overhead.

Screen shot of Welsh Newspapers Online

4. Embedding IIIF in the NLW digital architecture

There are also plans at NLW to develop a Digital Library interface that exposes data through the Universal Viewer - just one of the many viewers available for the IIIF Presentation and Image APIs. This viewer has been developed by Digirati in collaboration with the Wellcome Trust and the British Library, and will allow a range of content structures, including manuscripts, books, maps, photographs and art, to be presented online through a generic and consistent interface. NLW is also a multilingual institution - by law we are required to provide services in both Welsh and English - and the IIIF standard helps to meet this aim, as it allows metadata for the same object to be provided in multiple languages. NLW is working with Digirati to ensure the viewer is capable of displaying a fully bilingual interface.

I hope this article has inspired you to have a look at IIIF and get involved with the IIIF community. There is an active discussion list called iiif-discuss.

Elog.io and Image Provenance

Jonas Öberg, Shuttleworth Foundation

What is the relation between a herring and a clupea? The question may seem absurd, but it illustrates an important point. Most people would be able to relate to and understand someone talking about herrings, but would be hard pressed to form the same relation and understanding hearing someone speak of clupea (its Latin genus). For anything we can easily identify with our existing knowledge, forming new relations is easy. For anything which is beyond our pre-existing knowledge, forming new relations is hard and we need the scaffolding offered by relations to something we know before we can make the connection.

In art, this scaffolding is the provenance of a work. In order to form relations with an image we see, we usually need information about that image. Information about who authored it, what other works (if any) it's based on and so on. Having this information, knowing the provenance of a work, helps us appreciate and value it because it helps us form relations and references to it, all of which give it meaning.

'The Work of Art in the Age of Mechanical Reproduction', by Walter Benjamin, first published in 1936, offers thoughts on the difference between a copy and its original at a time when the physical differences between the two were slim. Indeed, for any work which is natively digital, there is no difference between a copy and its original. Benjamin hypothesized that what the copy lacks is the presence in time and space that makes up the original. In an age in which more works are published and distributed digitally, and where provenance is routinely lost as works get distributed online, one can easily imagine that what makes up an original is its provenance. Copies are ever more likely to lack information about provenance, and social media sites routinely strip such provenance information from the works being shared.

Realising that the provenance of a work is so important for our understanding of culture, I set out in 2013 to explore whether the link between a work and its provenance could be made more persistent, such that it is retained even after the work is shared online, stripped from its original context and re-used elsewhere. The promise is that if this information could be made available to people as they browse the web, they would be able to relate to the images they see in a different way: they could connect with the authors and institutions holding those images, deepening their understanding of them.

This work has resulted in Elog.io, a public service that builds on what we've learned over the years to create a global provenance database, a repository for information about digital works distributed online. It offers a way for people to explore works they find online in their proper contexts, and provides licence information and source links back to the origins of a work.

While initially seeded only with information about images which are part of Wikimedia Commons (about 23 million), Elog.io is living proof of the promise that this technology has: the potential to persistently connect an institution and creator with their works regardless of where and how those works are distributed online.

In order to make Elog.io truly useful, to progress beyond a prototype, it needs information about more images and it needs to implement features that support community curation of provenance data. All of this is possible. The technology to do it is available. Elog.io has proven this, and while active development of Elog.io has paused after its funding ran out, there will undoubtedly be additional initiatives that build upon it in the future. The advantages that technology like this could offer are just too big to ignore.

Climaps by EMAPS

Tommaso Venturini, Sciences Po


Climaps.eu is an online atlas providing data, visualizations and commentaries about the debate on climate change adaptation. It contains 33 issue-maps. Each of the maps focuses on one issue raised in the climate change adaptation debate and provides:

  • an interactive visualization;
  • a discussion of the map and the findings that it discloses;
  • a description of the protocol through which the map has been created;
  • the raw and the cleaned data on which the map is based, and the code employed to cleanse them.

Climaps.eu also contains 5 issue-stories providing a guide to reading several maps in combination.

The atlas is aimed at climate experts (negotiators, NGOs and companies concerned by global warming, journalists, etc.) and at citizens willing to engage with the issues raised by climate adaptation.

It employs advanced digital methods to highlight the complexity of the issues related to climate change adaptation and information design to make this complexity legible.

Controversy Mapping and the ‘Sprint’ Workshops

Climaps.eu was produced by the EU-funded project EMAPS and is the largest experiment with the method of ‘controversy mapping’ so far.

Controversy mapping is a research technique developed in the field of Science and Technology Studies (STS) to deal with the growing intricacy of socio-technical debates. Instead of cowering from such complexity, it aims to equip engaged citizens with tools to navigate through expert disagreement. Instead of lamenting the fragmentation of society, it aims to facilitate the emergence of more heterogeneous discussion forums (cf. http://climaps.eu/#/controversy-mapping).

Such objectives are pursued

  • by collaborating with experts from different camps in the debate,
  • by exploiting digital data and computation to follow the weaving of techno-scientific discourses,
  • and by using design and visualization practices to make such complexity readable for a larger public.

Because of the necessity to organize a trans-disciplinary collaboration between controversy mappers, issue-experts, data scientists and designers, EMAPS invented a new research format: the ‘sprint’.

Inspired by open-source hackathons and digital humanities barcamps, sprints are hybrid forums where 30-40 people with different backgrounds gather to work intensively for a full week to map a given socio-technical issue. Unlike their antecedents, sprints are extensively prepared in advance (by defining the research questions, collecting and cleaning the data, and forming the groups) to make sure the workshops succeed in delivering usable results in one week’s time (cf. http://climaps.eu/#/sprints).

Findings and Issue-stories

Adaptation and Mitigation in the UN Convention on Climate Change (UNFCCC)

Analysing the Earth Negotiations Bulletin, we identified the main discussions in the UN Convention on Climate Change, traced their visibility over time and identified the countries engaged with them.

Discussions on climate change adaptation and mitigation take place in various areas of the UNFCCC. Mitigation constitutes the main object of the convention; it is present everywhere in its conversations and hence structures the articulation of the debate. Adaptation, on the contrary, appears as a group of specific discussions and has a limited, but central, place in the negotiations.

Although adaptation has been present in the UN conferences from the beginning (in particular the question of its funding), an ‘adaptation turn’ is visible from 2004, with the rise of the questions of vulnerability and of climate change impacts.

Mitigation and adaptation in the UNFCCC debates

The 'Place' of Adaptation

The Geopolitics of Adaptation Expenditure

Using Rio Markers coding, we extracted the bilateral adaptation funding from the OECD Official Development Assistance data and visualized it in a way that allows us to compare how the distribution of aid varies between donor countries.

We compared not only the amounts committed by donor countries, but also their preferred policy areas, the concentration of their aid, their favoured recipient countries and closest UNFCCC recipient groupings, and the distribution of the aid according to the development level of the recipient country.

Some donor countries appear to specialize in particular policy areas: for example, Japan in disaster reduction, France in water management, Spain in government and civil society, the UK in biodiversity, and Germany in agriculture. Some donors concentrate their aid among fewer policy areas and recipient countries than others do (Spain, Italy, Ireland and the EU), which could suggest a more planned approach to adaptation aid.

Read more: The geopolitics of adaptation expenditure

"

Concentration: how many areas and countries do the donor countries fund?

Who Deserves to be Funded

We have compared the priorities of bilateral and multilateral adaptation funders with different ways of assessing vulnerability. Using the Germanwatch, DARA and GAIN vulnerability indices, as well as the Human Development Index, we explored possible correlations between the amount of money allocated to a country and the degree to which it could be said to be climate vulnerable. We found both positive and negative correlations, indicating that some funds and some countries prioritize in close alignment with the ways in which some indices assess vulnerability, while others do not. In general, development-oriented indices correlate more with climate adaptation funding, providing evidence that adaptation and development are closely connected policy issues.

We have also tried to find out where vulnerability indices are mentioned in climate-related contexts. In general we found that climate-specific vulnerability indices are rarely used by actors in the UNFCCC process, but widely cited in the news media.

Read more: Who deserves to be funded?

Who is vulnerable according to whom?

Reading the State of Climate Change from Digital Media

Using a variety of digital methods, we monitored the state of online discussion about climate change. In particular, we investigated how users share ideas (Twitter), search for information (Google) and buy books related to climate issues (Amazon).

On Twitter, adaptation is more visible than mitigation, with human and animal victims capturing users’ attention and NGOs using the platform most effectively for their messages. When querying Google for ‘climate change’ OR ‘global warming’, adaptation-related results are more abundant (than mitigation- or skepticism-related ones) and more visible in institutional sources. NGOs’ websites put food, water and extreme weather events at the top of their agendas. Looking at Amazon, different ‘selling points’ of the climate change debate emerge. New terminologies appear to brand the climate conflicts (e.g. ‘cold wars’ for conflicts over the melting Arctic), while skepticism appears to have been overtaken as a best-selling topic.

Reading the state of climate change from digital media

Linking related art objects with a cultural heritage TV programme using the Europeana API

Lyndon Nixon and Lotte Baltussen, MODUL University, Vienna; Netherlands Institute for Sound and Vision

Households have more and more connected devices and consumers are increasingly using devices simultaneously: this is clearest when it comes to viewing audiovisual programming on one device, and browsing online content on another. As a global trend, ‘using a second screen while watching TV is the new normal’ [1].

The EU project LinkedTV believes that future television must embrace these new consumption patterns as it moves online, if TV is to retain relevance in a space increasingly dominated by web-centric offerings. To this end, the project worked on the synchronized delivery of web content related to objects and topics present in a television programme, which we call ‘Linked Television’.

LinkedCulture is one of LinkedTV’s scenarios. It enhances the Dutch public broadcaster AVROTROS’s programme ‘Tussen Kunst en Kitsch’ (similar to the BBC’s ‘Antiques Roadshow’). In each show, various art objects are brought in by members of the public, then discussed and appraised by art experts. For each art object, links are provided to similar objects from Europeana, displayed and browsable on a second screen, such as a laptop or tablet, alongside the TV viewing.

Technical developments of the LinkedTV project which make LinkedCulture possible are:

  • A platform for ingesting the TV programme video, segmenting it into distinct chapters (one for each art object), annotating each chapter with a description of the object, and generating links to related web content;
  • An editor tool for human editors to view and check the segmentation, annotation and linking results in a web browser and confirm the final enrichment of the programme;
  • A Europeana API wrapper able to map concepts present in the art object descriptions to API queries, which extract related art objects along the different object facets such as type, material, creator, creation location, or creation date;
  • A web-based application which synchronises the display of the related content with the TV programme - optionally on the screen of another device. The LinkedCulture application focuses on allowing viewers easy access to details on the art objects in the show as well as browsing similar objects from Europeana.

Besides the generic LinkedTV toolset, the Europeana API wrapper provides the specific functionality of enriching the TV programme with related art objects. To achieve this, we describe the art objects in the Tussen Kunst en Kitsch episode chapters in terms of the same characteristics commonly expressed in the Europeana Data Model for cultural heritage objects: object type, creator, creation location, time period and material. Since these properties can occur in different fields of the Europeana metadata, we tested different approaches to determine the most effective query. Because we query largely textual metadata, it was immediately clear that we needed to consider two key aspects in a query:

  • Synonyms and similar terms: metadata property values are not formally typed in Europeana against a taxonomy that would include generalizations or specializations in the results. Our query must therefore expand its search terms to relevant synonyms and similar terms.
  • Language: metadata is generally textual and in the language of the providing organization, so queries need to express terms in the local language.

The wrapper handles these two aspects by generating a set of ‘relatedness’ queries with respect to an input (the art object’s description). It makes use of publicly available knowledge graphs on the web to parse description values, and to expand them or translate terms into equivalent forms. By splitting queries according to how closely related the response objects would be to the input object, responses can be ranked - enabling a decision on how many links to offer - or organized - e.g. visualizing links differently in the end application, based on how each object relates to the one in the TV programme segment.
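
As an illustration of this tiered querying, here is a minimal sketch in Python. The Search API endpoint and the who:/what: field syntax follow Europeana's public API documentation, but the expansion tables, the tiers and all example values are illustrative assumptions, not the project's actual wrapper code.

```python
import urllib.parse

API = "https://www.europeana.eu/api/v2/search.json"
API_KEY = "YOUR_API_KEY"  # placeholder; a real key is required

# Hand-made tables standing in for the knowledge-graph lookups the
# wrapper uses for synonym expansion and translation (assumed examples).
SYNONYMS = {"jug": ["jug", "pitcher", "ewer"]}
TRANSLATIONS = {"jug": {"nl": "kan", "de": "Krug"}}

def relatedness_queries(object_type, creator=None):
    """Yield (tier, query) pairs, from most to least closely related."""
    terms = list(SYNONYMS.get(object_type, [object_type]))
    terms += TRANSLATIONS.get(object_type, {}).values()
    what = " OR ".join(f'what:"{t}"' for t in terms)
    if creator:
        yield 1, f'({what}) AND who:"{creator}"'  # same type, same creator
    yield 2, f"({what})"                          # same type, any creator

for tier, query in relatedness_queries("jug", creator="Example Master"):
    params = urllib.parse.urlencode({"wskey": API_KEY, "query": query})
    print(tier, f"{API}?{params}")
```

Responses from the tier-1 query would be shown first (or most prominently) in the end application, with tier-2 results filling out the ‘similar objects’ browsing experience.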

LinkedCulture is thus a step towards eased entry for the public into Europeana’s rich and deep collection of digital objects, tied to the growing trend of ‘second screen’ usage while watching television.

More details on LinkedTV technologies - the platform, editor tool and web-based application - for LinkedCulture and other applications can be found at http://showcase.linkedtv.eu. For more information about the Europeana API wrapper, contact lyndon.nixon@modul.ac.at.

References

[1] http://www.forbes.com/sites/jeffbercovici/2014/07/10/using-a-second-screen-while-watching-tv-is-now-the-norm/

[2] UI design courtesy of Lilia Perez Romero (CWI); cf. LinkedTV D3.5, ‘Requirements Document LinkedTV User Interfaces (v2)’.

Acknowledgments

The author(s) wish to acknowledge the hard work of the entire project consortium. We are indebted to AVROTROS for their permission to use TKK video. LinkedTV has only been feasible thanks to an EU research grant (FP7-287911).

Contributing authors

Jonas Öberg is the founder of Elog.io, a technology used to automatically attribute photographs found online, and of Commons Machinery, a company with the mission of oiling the machinery of the Commons by developing the technical and social tools to enable a healthy Commons. He was a Shuttleworth Foundation Fellow, working on enabling persistent links between digital works and their creators. Prior to his work with the Shuttleworth Foundation, he was the Regional Coordinator for Creative Commons in Europe, a region stretching from Kazakhstan to Iceland. He has worked as a lecturer in Software Engineering at the University of Gothenburg, and he co-founded the Free Software Foundation Europe, where he serves as Executive Director.

Dr. Lyndon Nixon has been Assistant Professor in the New Media Technology group at MODUL University, Vienna, since 1 June 2014. Previously, from October 2013, he worked in the group as a Senior Researcher. He is responsible for the EU projects LinkedTV (as Scientific Coordinator) and MediaMixer (as Project Coordinator). He also teaches (Information Systems, Audiovisual Web, Media Asset Management and Re-use) and works on acquiring new research projects. His research domain is semantic technology and multimedia, with a focus on automated media interlinking and the creation of interactive media experiences (hypermedia).

Lotte Belice Baltussen is a project manager at the Research & Development department of the Netherlands Institute for Sound and Vision. She is involved in a myriad of national and international collaborative cultural heritage projects in which innovation is the key component: from Linked Open Data projects to analysis of crowdsourced metadata, and from linking cultural datasets to executing user studies on innovative search interfaces. Lotte’s academic background is in Film Studies and the specialised master’s programme Preservation and Presentation of the Moving Image.

Glen Robson is the head of the Systems Unit at the National Library of Wales. He started in IT development by implementing the Fedora repository at the National Library in 2005 and developed the first Fedora SWORD connector. His background is in IT development, but more recently he has moved over to the collections side of the library to help architect, manage and implement the library’s core systems, including the catalogue and its large Fedora repository. He has represented the library in a technical role in a number of European projects, including Europeana Travel, Europeana Cloud and Europeana Research, where he was part of the team looking at applying EDM in libraries. Internationally, Glen is actively involved with the Fedora Leadership Group and the IIIF working groups.

Petr Pridal is a consultant, programmer and entrepreneur. As the founder of Klokan Technologies GmbH, he leads the Swiss SME specialized in online map publishing, raster processing, geographic information retrieval and applications of open-source software for the cultural heritage sector. Within the last few years, Petr has participated in several international research projects and developed, with his team, many popular software applications (OldMapsOnline.org, MapTiler, the Georeferencer crowdsourcing web service, IIPImage with JPEG 2000, etc.) used by institutions and researchers all over the world.

Tommaso Venturini is associate professor and research coordinator at the Sciences Po médialab. He is the leading scientist of the projects EMAPS and MEDEA, and his research activities focus on digital methods, controversy mapping and social modernization. He teaches Controversy Mapping (http://controverses.sciences-po.fr/archiveindex/), Digital Methods, Data Journalism and STS at graduate and undergraduate level. He was trained in sociology and media studies at the University of Bologna, completed a PhD in Society of Information at the University of Milano Bicocca and a post-doc in Sociology of Modernity at the Department of Philosophy and Communication of the University of Bologna. He has been a visiting student at UCLA and a visiting researcher at the CETCOPRA of Paris 1 Panthéon-Sorbonne.

Jon Voss is the Strategic Partnerships Director at Shift, a global not-for-profit that designs products for social change. He’s helping to build an open ecosystem of historical data across libraries, archives and museums worldwide through his work with Historypin.org and as the co-founder of the International Linked Open Data in Libraries, Archives & Museums Summit. He leads Historypin’s US projects, creating innovative ways to help people build community around local history.
