
IIIF Annual Conference 2022, Day 01 Tuesday June 07, session blocks 3 and 4

0:00 Using IIIF to teach Digital Humanities: advancing digital literacy in higher education, Davy Verbeke, Lise Foket, Eef Rombaut, Frederic Lamsens and Christophe Verbruggen (Ghent Centre for Digital Humanities and Faculty of Arts and Philosophy, Ghent University, Belgium; Eef Rombaut also Antwerp School of Education, University of Antwerp)
31:35 From text to image: linking TEI-XML and IIIF via expert-sourced annotations and IIIF Change Discovery, Matthew McGrattan, Digirati, United Kingdom
59:04 USE ME: progressive integration of IIIF with new software services at the Getty, Stefano Cossu and David Newbury, J. Paul Getty Trust, United States of America
1:30:58 Exhibit: easy-to-use tools for promoting engagement and learning with your online digital collections, Edward Silverton, Mnemoscene, United Kingdom
2:00:18 Using IIIF Images in Visual Essays, Ron Snyder, ITHAKA, United States of America
2:32:19 From Manuscript to Transcription and Back Again: Closing the Virtuous Circle with Houghton MS Lat. 5, Laura Morreale, Multiple Affiliations; Sean Gilsdorf, Harvard University


Uh, hello, good afternoon everybody. First let me introduce myself: my name is Lise Foket, I am a research collaborator at the Ghent Centre for Digital Humanities. I'm really honoured to be here: it's my first IIIF conference, my first time in America, and my first in-person conference giving a presentation, so yeah, a lot of firsts. Thank you very much. Today I will share some of our experiences with using IIIF to teach digital humanities, with some practical examples and approaches of how the Ghent Centre for Digital Humanities tries to do that.

First, let me say something about the Ghent Centre for Digital Humanities, or GhentCDH, because obviously we needed a cool abbreviation for it. We have four focus areas, which you can see on the slide, and we support digital humanities teaching practices at the Faculty of Arts and Philosophy at Ghent University in Belgium. This presentation will mostly focus on our first focus area, digital heritage and virtual exhibitions, and specifically on how we try to integrate that into teaching practices.

Let me start with this image, which is what I actually saw a student do in a classroom. You may not see it at first, but it's a cropped screenshot of a IIIF viewer, pasted into Word. The students were trying to analyse poems that were hidden, so to speak, in a large periodical; they only wanted to see the poems and research those, but because they did not know how to operate the viewer, they did basically the inverse of what it is for, which is obviously something we want to avoid.

So that's the premise: digital competences, and knowledge about tools like IIIF, are still not well integrated into teaching practices; at least before GhentCDH tried to implement them, they were not. What we have been trying to do is actually integrate digital humanities within humanities scholarship, not treat digital humanities as a separate sub-discipline. The courses we taught were all humanities-oriented, and we tried to gradually implement digital techniques within those more classic, traditional courses. That is the basis of what I will report on.

Then we come to how to teach digital humanities. We often get stuck on the term: what is digital humanities? There are a lot of definitions, and reflecting on the quote on the right, I want to suggest that undergraduate students do not really care about digital humanities. They do not care about the whole reflection stage; sometimes they just want to do a simple digital task, for which you need certain skills. That is the approach we have taken: do not merely debate and talk about it, but take a hands-on approach, engage with tools and methods, and learn something by actually doing it.

So the question arises: what should one teach, then? My colleague Davy Verbeke defined fifteen transdisciplinary digital competencies, which you can see if you click the link in the drive, but I can summarise some of them here on the slide: using digital methods and platforms for the discovery of historical sources, capturing digital images, collecting material, collaborating. Those are some examples of the digital competences we try to integrate.

So what are the benefits of IIIF, or what role does IIIF have in this? We all know the usual suspects: IIIF as a service provides standardised access to digital objects, lets you cite and share digital objects, and offers display, storytelling and annotation possibilities. But it also stitches the technology together for researchers: you can use it to engage with multiple tools, and that is why it is very beneficial for teachers as well. As I said, the digital competences were our main goal; we wanted to increase digital competencies. Teaching IIIF was not necessarily an end goal for us, but looking back, we did end up using IIIF quite a lot in our teaching practices, and that is what I will focus on now. So that's the premise: how can IIIF support humanities teaching, and how do you do it? For this I will reflect on
two scenarios: one, creating virtual exhibitions with IIIF and Omeka S; and two, enriching IIIF collections with Madoc. We implemented the first one in many more classes than the second, but I will go through each scenario and reflect on the student perspective (how did the students respond to the assignments?), on whether we actually achieved those digital competences, and on our lessons learned, what we can do better. That's the structure.

The first scenario was creating virtual exhibitions with IIIF and Omeka S, and I will focus on a case where we integrated Omeka within the teacher education of the educational master of cultural sciences at Ghent University. Our main goal was to increase digital competences for humanities students, but also for the next generation: when you implement this in a teacher-education case, you hope that the teachers of the future will also use these techniques in their own classrooms, in the history classrooms of the high schools.

So, Omeka S. It was already mentioned yesterday at the showcase as well; we actually used Omeka S instead of Omeka Classic. It is an open-source platform, which most of us have probably heard of or know; it is used to create, curate and share digital collections and to create virtual exhibitions, and it is used by the GLAM sector worldwide. It is essentially a content management and publication platform for digital objects. The IIIF Server module in Omeka S enables IIIF support, and viewer plugins for the Universal Viewer and Mirador have been developed and integrated within the Omeka platform. Omeka is also already used worldwide for teaching, which made it a great case for us to experiment with.

What was our workflow? First, students had to choose a theme and assignment they wanted to work on: an interesting theme that was somehow related to a Belgian tradition or a Belgian place, so it really could be anything. Then they had to find digital items in relation to that theme, so finding digital items, mostly images, online was the first step. The next step is where Omeka starts: they added those digital items, often images, to the Omeka S platform and used the platform to add metadata, tags and mapping. The IIIF Server module then uses those images to generate IIIF manifests on your Omeka server. After that, students could use the manifest URL to continue working with those images, for example in Storiiies or in Exhibit: they created web pages and used Storiiies and/or Exhibit to build a moving sort of narrative, and they were able to share their collections.
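The manifest URL hand-off described above can be sketched in code. This is a minimal, illustrative example assuming a IIIF Presentation 2.x manifest of the shape the Omeka S IIIF Server module produces; the URLs and labels below are invented, not real Omeka S output.

```python
# Walk a IIIF Presentation 2.x manifest and list each canvas label together
# with the image URL it points to. SAMPLE_MANIFEST is a made-up example.

SAMPLE_MANIFEST = {
    "@id": "https://example.org/iiif/item/12/manifest",
    "label": "Student collection item",
    "sequences": [{
        "canvases": [
            {
                "label": "p. 1",
                "images": [{"resource": {"@id": "https://example.org/iiif/img1/full/full/0/default.jpg"}}],
            },
            {
                "label": "p. 2",
                "images": [{"resource": {"@id": "https://example.org/iiif/img2/full/full/0/default.jpg"}}],
            },
        ]
    }],
}

def canvas_images(manifest):
    """Yield (canvas label, image URL) pairs from a v2 manifest."""
    for seq in manifest.get("sequences", []):
        for canvas in seq.get("canvases", []):
            for image in canvas.get("images", []):
                yield canvas.get("label"), image["resource"]["@id"]

for label, url in canvas_images(SAMPLE_MANIFEST):
    print(label, url)
```

A viewer such as Mirador, or a storytelling tool pointed at the manifest URL, performs essentially this walk to discover the images behind the manifest.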
So let me show what one of those results looked like after adding the items and building the web pages via Omeka's content blocks; this is a good case to show you here. They used text and images to build a web page, with a Storiiies story integrated within Omeka S through an iframe, which also makes it an easy platform for this. So they were able to combine the benefits of those IIIF stories with the benefits of Omeka S. The presentation platform was Mirador, showing the metadata that was added by the students, the image that was found, and often also a link to the original institution. If IIIF was available, students ingested the IIIF image directly into Omeka, but we found that was often not the case, or students had a hard time finding the IIIF manifest URL. So that was the first scenario we implemented.

The second scenario is enriching digital collections with Madoc, and this was in collaboration with associate professor Marianne Van Remoortel. It reflects back to the image that I showed you in the beginning, of poetry in feminist periodicals. The Madoc platform was our main enabler for this classroom. It is developed by Digirati, funded by the universities of Ghent and Brussels and also the National Library of Wales, and maybe some more institutions. It is a completely open-source platform that allows you to import and reorganise IIIF manifests and collections, and its main intention is to let participants actively enrich digital objects with metadata and annotations, which could be transcriptions, translations, keywords and commentaries. In the back end of Madoc you can configure a customised capture model, to identify: okay, what information do I want, and what do I need?

Within this specific teaching case there was a whole collection of nineteenth-century feminist periodicals. Occasionally on a page within those periodicals you could find a poem, but if you are only interested in the poetry, and want to analyse that poetry, it is hard to know where to look: you first have to identify all those poems before you even know what you want to analyse. So the first step, which we did with the students, was to import the IIIF manifests, which were all periodicals, all the different volumes, into Madoc. All the students were also made site administrators of the Madoc platform, so that they had access to both the back end and the front end, which was needed to import the images and to collaborate on how we would analyse the poems through Madoc. The second step, after importing them all, was that we created a project in Madoc.
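That import step, pulling a set of periodical volumes into the platform, amounts to walking a IIIF collection document and gathering its manifest URIs. A small sketch with invented data (real collections may also nest sub-collections, which the helper handles recursively):

```python
# Gather the manifest @ids listed by a IIIF Presentation 2.x collection,
# recursing into any nested sub-collections. SAMPLE_COLLECTION is invented.

SAMPLE_COLLECTION = {
    "@id": "https://example.org/iiif/periodicals/collection",
    "@type": "sc:Collection",
    "manifests": [
        {"@id": "https://example.org/iiif/vol-1893/manifest", "label": "Volume 1893"},
        {"@id": "https://example.org/iiif/vol-1894/manifest", "label": "Volume 1894"},
    ],
    "collections": [],
}

def manifest_uris(collection):
    """Collect manifest @ids from a collection and its sub-collections."""
    uris = [m["@id"] for m in collection.get("manifests", [])]
    for sub in collection.get("collections", []):
        uris.extend(manifest_uris(sub))
    return uris

print(manifest_uris(SAMPLE_COLLECTION))
```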
We used the capture model to identify the poem in a periodical through a bounding box, and to annotate the title and the author of the poem. This is what it looks like in the back end, the capture model, and this is what it looked like in the front end, where the students did the work: they annotated the poem and identified the title and the author, with the option to add another author, which was sometimes needed because poems could have multiple authors. That was the first main assignment. After identifying all those poems, the students were able to browse through the annotated poems, browse through the collections, and actually see: okay, we have all this poetry, now we know where to look.

The next step was to choose three to five poems for an in-depth analysis, and that was a harder step. How would we use Madoc to actually analyse those poems? It was really difficult, because all the students wanted to do different things with the poems, each wanted a different angle on the analysis, so it was hard to find one consistent data model, or capture model, to analyse them all. The solution was that each student created their own project within Madoc and had to think up a custom capture model to analyse the three to five poems they had chosen; so one capture model per three to five poems, instead of one consistent model for seventy poems or so. Another part of the solution was that we used the HTML content blocks in Madoc for general reflections, and to create a sort of virtual exhibition using the Madoc platform.

Let me show what the result of that was. This is an actual site; I'm not sure it's public, I don't think so, so let me demo it. This was the front page they came up with, using some content blocks to add HTML to it; these are the different projects of each of the students, and when you click on one project, you see that they added more content blocks to analyse these poems. Then you can see the periodicals they chose, which all contain the poems; you can click on them, and then you can actually see the analysis and its data. So you see the annotated poem, and on the left-hand side is the capture model they needed in order to analyse the poem. That was the whole assignment that we did within Madoc.

This was only five students, so we don't actually have any data or surveys on how they found it, but most of them said they found it great that it was a practical assignment and that they got to know the platform. It actually also helped us contribute to Madoc, because they were able to share their own experience of what worked well, what did not work well, and what should be focused on, and we actually used their input for further developments in Madoc in the following months. So yeah, that was the end result.
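A rough sketch of the kind of record the annotation step above produces: a W3C Web Annotation targeting a bounding box on a canvas, carrying the poem's title and author. The field values, motivation and URIs here are illustrative assumptions; Madoc's actual capture-model storage format differs.

```python
# Build a Web Annotation whose target is an xywh region of a IIIF canvas
# and whose bodies record the captured title and author. All IDs invented.

def poem_annotation(canvas_id, x, y, w, h, title, author):
    """Return a Web Annotation for a poem region on a canvas."""
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "motivation": "describing",
        "body": [
            {"type": "TextualBody", "purpose": "tagging", "value": f"title: {title}"},
            {"type": "TextualBody", "purpose": "tagging", "value": f"author: {author}"},
        ],
        # Media-fragment syntax: the region is x,y,width,height on the canvas.
        "target": f"{canvas_id}#xywh={x},{y},{w},{h}",
    }

anno = poem_annotation(
    "https://example.org/iiif/vol-1893/canvas/p42", 120, 300, 450, 600,
    title="A Song of Spring", author="Anonymous",
)
print(anno["target"])  # https://example.org/iiif/vol-1893/canvas/p42#xywh=120,300,450,600
```

Because the target is just a canvas URI plus a region fragment, the same record can later be rendered by any IIIF-aware viewer on top of the periodical page.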
Finally, let me reflect on the question: can we teach digital humanities using IIIF? These will mostly be reflections on the first scenario, simply because we have implemented the Omeka case a lot more, so we have a lot more data and have run surveys with those students. But I believe those reflections are also quite transferable: they can be applied to the Madoc projects too; it's not so different.

So what were the experiences? We had a lot of students who found it a very positive assignment. They found it very positive that the assignment combines both research competencies and digital competencies, that it gives teachers and students the opportunity to get creative, and that it had a sense of practice: they often explicitly liked that we integrated the digital and the research competencies rather than treating them as separate. Others said that more students would benefit from an assignment like this. A recurring remark was that it is more than just writing a paper in Word, as they had done so many times before: you actually have to think about presenting the work to a wider public, use HTML, use images, combine images and text, basic things, and it was also a clear reminder of the digital skills they will need as teachers.

That reflects one of the pitfalls, though: some students struggled with the assignment, mostly because they experienced the technology as somewhat overwhelming. They found that learning the platform, the digital competencies and the research competencies at the same time was quite a lot of information in one go. And other students said: "I'm not good at digital things and I don't like doing it", and it is really hard to motivate students who just don't like it. Those are some of the pitfalls we experienced.

The next question is: did we actually achieve the digital competencies we had determined in advance? We had analysed beforehand which competencies this assignment should deliver, and then asked the students in a survey (there were 32 participants) to indicate with a slider which competencies were achieved: 0 was "not achieved", 50 "sufficiently", and 100 "largely". We see that capturing digital items (creating a digital version), collecting digital items (making a collection), enriching those items, visualising and presenting material in a digital way, and using digital methods and collaborating were the five competencies they indicated as most achieved. To a lesser extent, five other competencies came up: using digital platforms to discover sources, source criticism in the digital age, using modelling techniques to make relations between digital objects, cleaning up data, and using digital platforms to actually share collections with other people. The sharing one was mostly due to copyright restrictions: we cannot actually share the students' work with a wider public, which is one of the continuous pitfalls of these projects. You have to deal with copyright, so most of those virtual exhibitions remain behind passwords, not visible to the public. Among the competencies that scored "not sufficiently", or I would say not really achieved, were using digital methods to automate processes to analyse sources (because this was not really a data-analysis task but more of a presentation task), storing data, and then the meta-reflections: how IIIF, how the digital turn, fits into society and ethics. Those they indicated as achieved to a lesser extent.

So what should we look out for when trying to teach digital humanities and trying to use IIIF? It was actually my colleague Davy Verbeke who made a SWOT analysis specifically for teaching with Omeka S, but I think it can also be applied to teaching with IIIF, or with a IIIF-enabled platform such as Madoc. For the strengths, I put in bold what I believe are the benefits of IIIF itself, while what is not in bold are benefits mostly of platforms such as Madoc or Omeka: multifunctional, open source, user friendly, you can add metadata, it is modular, you can embed content, and it is compliant open data. Some of the weaknesses, specifically for Omeka, are that a lot of modules are still in development, and design is not always easy to get right in Omeka, especially if you don't have any experience. The opportunities: you can introduce students to digital literacy, use blended-learning techniques, and get acquainted with techniques such as digital storytelling. And
there is also a clear connection to the job market: a lot of the students will end up, or want to work, in the cultural heritage sector, and will probably use Omeka or IIIF, which are widely used by that sector. So I think the connection to the job market is a really big opportunity, and it was stressed during those assignments as well. Other opportunities: you can use it integrally or partially; you can use it to replace an assignment, or as a sort of additional assignment; so you have multiple ways of integrating it within your course. And there is also the benefit of peer learning: in platforms such as Omeka or Madoc, students can actually look at each other's work and learn from each other. There were no restrictions in the sense of "you have your own page and you can't look at anyone else's"; we actively encouraged students to work together, learn from each other and look at the pages of other people, which was also a great opportunity.

As for the threats: the recurring one is digiphobia, the "I just don't like digital, I don't like doing it" attitude, which was an often-recurring thing. And then "appendicitis", as my colleague Davy Verbeke calls it, which means seeing it as a sort of appendix: an unnecessary assignment without any actual benefit, "oh, it's just something I have to do for school, for the uni, whatever the connection to the job market". Some of the other threats were unclear evaluation criteria and providing no feedback. A really important one is having no in-person training sessions: it's not because these are digital platforms that you can just use them without training sessions; we actually found that students need those in-person training sessions in order for the assignment to succeed. And copyright is often an issue. So those were our findings. Thank you for listening; these are my contact details and those of my colleagues, because without them I would not have been here, so I am really grateful to all of them. Yes, thank you for listening.

Hey, just a small one. I think digiphobia is definitely one of the biggest things with students, and certainly for younger people, having to learn something so complicated and so new is usually a painful process. Do you think, for these history students, it might have helped if you coupled them with, say, computer science students? Or, as a teacher in general, do you think that would help at all?

It's actually an idea I had never thought about. I think it could be very helpful, although I don't know, and neither does my university. Most of the time it's either a group of computer science students trying to do something, or a group of history students trying to do something, and it's hard to get the university to say, well, let's just cross that line and let those groups mix. But I actually think it would be very beneficial. Because even in the group assignments, when one person was really good at Omeka, for example, we often found they had already completed the whole digital assignment while the peers, who were not really involved, just focused on the content. So in those group assignments you could also divide the digital tasks and the more humanities-related tasks, and I think putting those two groups together could be helpful. Interesting, yeah.

One more question. It doesn't have to be completely addressed here, but I think it's interesting how much of this stuff you talk about with students is also useful for educating people who are very well established in traditional research but have not done digital research. There are definitely changes you have to make when you're dealing with somebody who has already published a monograph and you're trying to convince them that this might be something they can add. I think the digiphobia comes in the same way, but sometimes the attitudes can be very different. So I don't know if you have any experience, or any ideas, about how you might translate this from students to researchers? Yeah, so we do
offer some of the training to teachers and peers as well, and obviously we have tried to integrate this into the humanities teaching practices by talking to professors and by proposing: oh, we could implement this within your course. That's the way we went about it, but it is not yet widespread. I think a lot of the SWOT analysis is also applicable to researchers, and you have to go about it in a similar way: stress the opportunities, counter the threats, provide support, provide user-friendly manuals. The SWOT analysis for teaching with Omeka S is actually intended for teachers, and also for researchers and professors, to give a user-friendly overview, in Dutch, of the benefits of Omeka and also of IIIF.

Great, thank you, Lise. Quick request before we bring the next folks up: if you're in the last block, we need your presentations in the Google Drive folder. To put you on the spot: Ron Snyder, Laura Morreale, Sean Gilsdorf, get it into the Google Drive, and if you don't have that link, Slack one of us and we'll work it out. Okay, up next we have Matt McGrattan, talking about linking TEI-XML and IIIF.

Oh, I'm sorry, the glasses, so I can even see what I'm doing. So, I'm talking about one particular approach to linking together text, in the form of TEI-XML transcripts, with IIIF, using crowdsourcing techniques, but with experts doing the work, and using IIIF Change Discovery, one of the newer IIIF specifications, as a way of linking together unconnected systems: doing loose coupling between an environment that people might use for discovering content and one that people might use for annotating and creating content, using that specification to enable that kind of working together.

I'm going to talk about the role IIIF played in the project; I'm going to show a discovery interface that we built for this project, which enables the kind of comparison the project stakeholders wanted to be able to do; I'll talk a bit about TEI-XML and how that played into the project; about Madoc, which has been mentioned already, but which we used as the environment for the social element of this project; and then about how Change Discovery brings those two things together.

So, the core use case. The people who originally came to us to do this work are interested in a particular Zoroastrian text. It has lots of witnesses in many different libraries all over the world. Some of these are really big libraries, like the British Library and the Bodleian and so on; some are really tiny, temples in North India and places like that, that don't have any IT infrastructure. What they wanted was to bring together transcripts of these particular manuscripts (and they're all different, so they have different texts) with digitised images, in one kind of environment where people could then explore and compare multiple witnesses to the same text. What I'm talking about today is not the final prototype for this project; the project is currently in an alpha-testing phase, with a go-live later in the year. Instead, I'm just looking at the approach, the workflows, the way we used IIIF, and the processes used to realise that prototype. And I put together a demo for this presentation using a completely different set of content, not the original content for the project.

So we've got two endpoints of information in this project: we've got transcripts, and we've got digitised manuscript images. What do we do to bring these two things together in the middle? The thing about the transcripts is that we weren't going to create them alongside IIIF; there wasn't going to be some workflow in which people would be transcribing IIIF-based resources.
The transcripts are in TEI-XML, and they already existed, in some cases created years ago, because it's been a long-standing project to transcribe this content. So this was a fixed point: we had this TEI-XML, and we couldn't change the format or structure of that data. The TEI-XML was well understood by the domain experts; they were used to working with this format and understand how to use it. So we were going to be using TEI-XML whatever happened; that was a fixed point, the transcripts of these manuscripts. What do we do to bring them together with the IIIF?

On the other hand, for the images, there was already some IIIF in existence for some of these manuscripts, at the British Library, the Bodleian, the KB in Copenhagen and some other places. But there was also the opportunity to create new IIIF for digitised images where IIIF didn't exist. So IIIF was the obvious format choice for this type of resource (I'm talking at a IIIF conference, of course it was), but there were lots of good reasons for us to do that anyway.

The question then is: what should we build? What do we put in the middle between these two sets of existing resources, the TEI-XML transcribed text and the IIIF with the digitised images of those manuscripts? We had some key requirements. The users of the site have to be able to view these images in deep zoom and do side-by-side image comparisons, comparing more than one manuscript witness to the same text, and also be able to combine images and text and see them together in some way, to explore image through text. These are familiar requirements, right? These are classic use cases; they go right back to the very first IIIF conversations, when everyone was sitting in the Parker Library in Cambridge in 2011. These were the use cases being discussed then: how do we compare two things, how do we put text and image together? This is right in the middle of IIIF's core feature set; it's the basis of the Image API and the Presentation API.

So can we just create some template manifests, fill them in, put them in Mirador or the Universal Viewer, and our job is done? It's not really as simple as that: in this case it's all about process and workflow, not so much about the viewing. The requirement was to bring together pre-existing TEI and IIIF, and also to make use of document structure to navigate the IIIF resources. I want to see this chapter of this book, and this verse, in this manuscript, and this chapter and this verse in that manuscript, without having to find it by paging through hundreds of pages. So how do we model this in IIIF, and how can we leverage IIIF in the workflow that we're using?

I think we can split apart two things we can do with IIIF. One is IIIF as a data model, as a carrier of information. I'm not going to go into a deep technical level on the IIIF side too much, because I think a lot of this audience is familiar with it, but a key part is the IIIF Presentation API specification: we can use ranges and structures to model the structure of these manuscripts, the intellectual structure of the document, divided into navigable elements that people can use to go through the document and land in the right place.
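The ranges idea can be sketched as nested IIIF Presentation 3 Range items mirroring book, chapter and verse, with the leaves pointing at canvases. IDs and labels below are invented for illustration:

```python
# Model a document's intellectual structure as nested Presentation 3 Ranges,
# then resolve a "chapter/verse" path to the canvases that show it.

STRUCTURE = {
    "type": "Range", "label": {"en": ["Book 1"]},
    "items": [
        {
            "type": "Range", "label": {"en": ["Chapter 1"]},
            "items": [
                {"type": "Range", "label": {"en": ["Verse 1"]},
                 "items": [{"type": "Canvas", "id": "https://example.org/canvas/f1r"}]},
                {"type": "Range", "label": {"en": ["Verse 2"]},
                 "items": [{"type": "Canvas", "id": "https://example.org/canvas/f1v"}]},
            ],
        },
    ],
}

def canvases_for(range_, *labels):
    """Descend through nested ranges by English label; return canvas ids."""
    node = range_
    for label in labels:
        node = next(child for child in node["items"]
                    if child["type"] == "Range" and child["label"]["en"][0] == label)
    return [c["id"] for c in node["items"] if c["type"] == "Canvas"]

print(canvases_for(STRUCTURE, "Chapter 1", "Verse 2"))
```

A viewer can use exactly this kind of lookup to jump straight to "Chapter 1, Verse 2" in each witness, instead of paging through the manuscript.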
And we can use annotations as the mechanism to link together text and images via IIIF canvases. So IIIF is a really good fit for that data-modelling problem. But also, increasingly, we can use IIIF as a kind of process enabler. We've got IIIF Change Discovery now as a specification, which gives you a standard way of discovering what changes have happened to IIIF resources in some source of information. I want to go and find out which things have been annotated, which things are ready to move into my discovery environment, which things are ready to be indexed into search, and so on, and I can do that with Change Discovery; I don't have to write some custom API. And also, I want to have a viewer that I can bring up to show this thing beside that thing at the same time. IIIF Content State is a really good way of representing that, so in this case IIIF is a key enabler for the process. TEI, on the other hand, provides us with the transcription text itself, but it can also provide us with the structure of the document. We can work out which bit is this verse, which chapter that is, which witness an extract is from, that kind of abstract structure of the document, from the markup that's already present. So what did we do? We wanted to bring these things together using an annotation process. We wanted experts to be able to go into Madoc, as a crowdsourcing environment, and link the two elements, this particular canvas, or this region of a canvas, with this element in the TEI, via some kind of simple autocomplete interface. We didn't want anyone in that crowdsourcing platform to be exposed to either the IIIF, in the form of JSON or whatever, or the TEI XML directly. Instead, we wanted to give them a list of books and chapters and verses and so on that they can select from in some nice way, and then link it all together. So what did we build? We took Digirati's discovery environment, which uses Change Discovery to bring content in. We used IIIF Content State to initialise the viewer and to make reusable URLs that can be shared in various places and that would support comparison. We wanted to be agnostic about metadata, because we didn't know what metadata would be attached to these resources; they could come from anywhere. We also wanted search to be flexible: we wanted to be able to find things irrespective of the format the metadata was in. We needed to store the TEI and convert it into a format we could display alongside the IIIF, and we built an autocomplete API so we could extract the elements of the TEI structure to send to the transcription environment, so that someone could select the right TEI identifier within the XML. So I'm going to give a quick demo of a few things and then come back to some more slides. There are some videos, but I'm going to do some of this as a live demo instead. I will go to the discovery environment. This isn't wildly different from anything anyone has seen before; there are lots of discovery environments out there that people have built around IIIF resources. This is the one that was built for this project, although this copy doesn't have the project's branding on it, but I have brought in some IIIF resources here, mostly from the Bodleian. The reason for that is that there is a thing called the IIIF registry, where institutions can register the Change Discovery activity streams they've published.
So I just said, in effect: bring me everything from the Bodleian, and it brought it all in automatically, without me having to do anything with it. And then I've got faceted search and browse. Say I want to find things from the Western medieval manuscripts collection where the language is Latin, for example: I hit find, it does a little search, and I get my list of things. I can go and view an item in the usual way and it will
bring up Mirador and display the item, its metadata and statements and so on. But one big difference between this and some other discovery environments is that I might want to do some comparison. So I'm going to pin some things to my little shopping basket of stuff that I want to look at, by clicking this 'pin it' button. If I look at the top of the screen, I can see which things I've pinned, and I can view them in my little basket: these are the three things that I pinned, and now let's just look at these two. Then I can do a comparison and it will load them both up. There's a blank space on the right here that's for transcription text and so on, which doesn't appear in this case. This is using a slightly early version of IIIF Content State, so there's a slight difference from the published spec. If I go to a base64 decoder and just drop in the relevant part of the URL, it decodes to a blob of JSON, which is a IIIF Content State annotation that says: load these two manifests. And that's completely standard, and there's a little copy-and-paste button here: I can copy this link, open another viewer, paste it in, and it will turn it back into the same view, so I can see it somewhere else. So this was the kind of discovery environment that we made. If I look at the admin UI for this, you will see the Change Discovery streams that have been set up here; this is quite small, I'll make it bigger. There's one for Madoc, which I'll show in a few minutes, and there's one for my own little experimental stream.
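The base64 decoding shown a moment ago can be sketched as a round trip. The Content State spec uses base64url encoding with the padding stripped; the annotation below is an invented example of the "load these two manifests" shape, not the project's exact early-draft payload.

```python
import base64
import json

def encode_content_state(obj):
    """JSON-encode, then base64url-encode with padding stripped."""
    raw = json.dumps(obj).encode("utf-8")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

def decode_content_state(token):
    """Restore stripped padding, then reverse the encoding."""
    padded = token + "=" * (-len(token) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# An invented content-state annotation targeting two example manifests.
state = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "type": "Annotation",
    "motivation": ["contentState"],
    "target": [
        {"id": "https://example.org/iiif/ms-1/manifest", "type": "Manifest"},
        {"id": "https://example.org/iiif/ms-2/manifest", "type": "Manifest"},
    ],
}
token = encode_content_state(state)
restored = decode_content_state(token)
```

The token is what ends up in the shareable URL; any viewer that understands Content State can decode it back into the "show these two things side by side" instruction.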
That's just one I put together, and there's one from the Bodleian. I've told it how often to synchronise each stream: it checks the Bodleian every couple of hours and Madoc every ten minutes or so, and it will just poll those, look for new content, and when it finds something, import it into the system without any manual work. So that's the basics of the discovery UI that we made. On the TEI side, let me talk about the presentation. What we're interested in is making the TEI available as
APIs to be consumed by something else. We weren't really interested in building TEI management tooling, and we didn't need to: the project already had that. It was built before we even came on board, and they were very experienced at creating this TEI. What we were thinking about was how to use it in our workflow. So what we did was use the TEI as the source of the structural elements within the manuscript: this is where the chapter boundaries are and so on, and these are the identifiers that are being used. But we could also then extract the text from those elements for the discovery UI to display alongside the content, and the structural elements were provided as an autocomplete endpoint for tagging resources in Madoc. I'm going to skip over this slide; it's more about what we did, and I've got some links here, so very quickly: there is a little autocomplete endpoint so the Madoc instance can find all the TEI documents that are available for autocompletion. This particular demo instance only has one, but the live project has many; they have one for each of the manuscript witnesses, so when you're tagging your document you can say: this is the TEI XML that I want to use as the source for my tagging. Then the TEI itself is provided via a kind of autocomplete endpoint. If you look at the end of this link, I've built this demo around the Bible, so that's Genesis chapter one, and it gives me all the identifiers in the TEI XML for every verse in Genesis chapter one. I can change this to Exodus one and I'll get the same thing, and these can be fed into Madoc's crowdsourcing UI, and there is always an identifier that maps back to the TEI XML, so that when I select something I know I'm annotating this content with an identifier that matches the TEI content.
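A toy sketch of the kind of lookup such an autocomplete endpoint might perform on the server side. The identifier scheme (Gen.1.1 and so on) and the data are invented placeholders, not the project's actual identifiers.

```python
# Invented TEI identifiers, grouped by chapter, standing in for what would
# really be extracted from the TEI XML of each manuscript witness.
TEI_IDS = {
    "Gen.1": ["Gen.1.1", "Gen.1.2", "Gen.1.3"],
    "Exod.1": ["Exod.1.1", "Exod.1.2"],
}

def autocomplete(prefix):
    """Return all TEI identifiers starting with what the user has typed."""
    hits = []
    for ids in TEI_IDS.values():
        hits.extend(i for i in ids if i.startswith(prefix))
    return sorted(hits)

print(autocomplete("Exod.1"))
```

The point is that the annotator only ever sees a friendly pick-list; the stable TEI identifier travels with the annotation behind the scenes.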
So let me go into Madoc. Madoc, as Lise was saying earlier, is a kind of crowdsourcing environment. What it normally does is import a shallow copy of a manifest so that you can annotate it with new material, and you can add that manifest to one or more projects. I've got some videos of this happening, but I might skip them. In this little bit here, I used a manuscript portal to find some Gutenberg Bibles and then just dropped them into Madoc from where they came from; in this case that was John Rylands, I think. It brings them in and I can import that content. For this project we did the same thing with Gutenberg Bibles from the Rylands, but also from the Bodleian and from libraries in France, a bunch of them. So inside Madoc, someone can select one of them, choose the autocomplete endpoint for the TEI that they want to use to tag it, and then they can select regions on the canvas, or whole canvases, and mark it up
so I've got a short video of me doing that here. This is from the John Rylands; I've got some TEI for the Bible here, so I'm going to go and find the folio I'm interested in. This is a Gutenberg Bible, and I'm going to contribute to this. This is all IIIF here, and at this point I don't need to know anything about the TEI; I can just type in here: Exodus. It asks me, is this the book? So I've tagged it with Exodus; this is the start of Exodus. Then I can add another section, and this time I want to tag chapter one, verse one. The autocomplete is getting this information from the TEI XML on the back end, so it knows there is an identifier in the TEI for Exodus 1:1, and when I make the annotation, drawing the box around it and confirming it, it is associating this region with that same content in the TEI XML elsewhere on the site. So I'm going to skip the video. There's also a review workflow, which was a video in the presentation, showing how you can then review and approve these. The model they use on this project is that it's only when every page in a manuscript has been annotated, and the annotations have been approved by a reviewer, that it will be published into the Change Discovery feed. But for my demo I just went straight through; it's complicated and I don't need to do the whole thing. The basic process is: there's a Madoc instance and a discovery environment instance, and they're connected together. Madoc's role here is basically as a Change Discovery source: the front-end discovery UI is polling Madoc, and Madoc is publishing a Change Discovery feed with all the annotations that have been created and anything that has been updated. When the discovery environment finds any of these, it fetches them, pulls them in, and then indexes all of those annotations into search, so you can search for things by content. But it also transforms them into manifest structures: the manifests don't have ranges at this point, but it builds a table-of-contents set of ranges from the annotations that have been created, so that the manifests it republishes have them internally and you can use those ranges elsewhere. And then it brings in the text, if it has it, from the TEI XML. It doesn't store the text annotations it uses on the display side; that was a conscious choice, because the text the project actually wanted to display has more complex formatting, colour coding, and so on.
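The table-of-contents step just described (deriving ranges from the linking annotations) might look something like this in outline. The annotation shape is heavily simplified and all URIs are invented.

```python
# Simplified linking annotations: each one pairs a canvas with a TEI
# identifier. All URIs and identifiers below are invented examples.
annotations = [
    {"canvas": "https://example.org/canvas/10", "tei_id": "Exod.1.1"},
    {"canvas": "https://example.org/canvas/10", "tei_id": "Exod.1.2"},
    {"canvas": "https://example.org/canvas/11", "tei_id": "Exod.1.3"},
]

def ranges_from_annotations(annotations):
    """Group annotations by TEI identifier into Presentation-style Ranges."""
    ranges = {}
    for a in annotations:
        r = ranges.setdefault(a["tei_id"], {
            "id": f"https://example.org/range/{a['tei_id']}",
            "type": "Range",
            "label": {"en": [a["tei_id"]]},
            "items": [],
        })
        canvas = {"id": a["canvas"], "type": "Canvas"}
        if canvas not in r["items"]:
            r["items"].append(canvas)
    return list(ranges.values())

structures = ranges_from_annotations(annotations)
print([r["label"]["en"][0] for r in structures])
```

The resulting list can be dropped into a manifest's `structures` property, which is how the republished manifests gain navigable ranges they never had at the source.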
What we actually display is HTML; I won't say that part is IIIF, the annotations just point at it. Here, this is the workflow: we import TEI XML into the discovery environment and manifests into Madoc; you annotate in Madoc; the annotations get reviewed; and when the reviews are published, the discovery environment picks everything up automatically over Change Discovery. So, the final part of my presentation, a demo of the Change Discovery side: I'll go back to the discovery environment, now back to normal size, and into my basket. There is a little control at the top here that selects from all the possible books that are in the TEI XML, and I want to look for Exodus. It finds three results, because I annotated three things with Exodus, and you can see there are three different Bibles here; they all have the same text, but the illumination is different. I want chapter one, verse one, and it will find me my three witnesses to Exodus 1:1, and again I can now add these to my little comparison basket. And I can compare
and it will bring them all up. The delay here is fetching the images from the image services, but it will also hit the TEI and load all this material on the right. This is the TEI transcription of the elements of these Bibles; I was doing it myself, so only some of them are covered, but if I scroll down, on one of these, where I did annotate one from the Bodleian for example, you see it brought in the text, so there's the Latin text and the English text, and that's kind of cool. That's come from the TEI XML and it's just being brought in and displayed alongside the IIIF, because they share the same identifier: an identifier in the TEI XML, linked via IIIF, and that's what this demonstrates. Going back to the presentation: challenges. Things that worked really well: IIIF was a really good fit for all the data-modelling side and for the process side of things. Madoc worked well for the annotation creation and for the basic review and approval workflow, and synchronising the two together with Change Discovery was pretty painless and worked nicely. The challenges were really about how much we make this a generalisable platform and how much it is specific to the particular TEI XML and project requirements of the funder of this project. We ended up in a slightly awkward place between those two things. The front end is quite specific to their use: the navigation by book, chapter and verse and so on is very tailored to the particular content model the project had, but the underlying APIs are much more flexible and extensible, so it's pretty easy to change this and customise it for different types of content. TEI, obviously, is a pretty big, expansive standard; there are lots of different ways you can use it, so the site is quite opinionated about what the TEI should look like, and that's largely based on what this project's TEI was like. It's pretty flexible about transcription text, and there is quite a lot of powerful stuff it does with text, but it's quite opinionated about how the files are structured and what it expects identifiers to look like. That's something that could be changed, but it was fine for this project. Our conclusions are that IIIF worked really well for this kind of thing, especially Change Discovery and Content State working alongside the Presentation and Image APIs, and that you can bring together IIIF and TEI XML, which is a format scholars are actually using, where there's a huge investment and lots of
great work done in XML. With an annotation environment of whatever kind, this can be a powerful group of tools to join things together without having to expose people to the nitty-gritty of either the TEI XML or the IIIF. That's it for me. Question: thanks very much, great to see, and you may have answered this already, but just for clarification: you started by saying the TEI was already in place, so how much did your experiments suggest changing and improving the TEI for better incorporation in the future? We didn't really have that opportunity. This was quite a long-running project and we came in partway through; they had built tooling to enable people to create the TEI XML, they had trained people to create it, and they had made editions of many of these manuscripts that way. Luckily, the way they had chosen to do it was actually quite friendly to us. They had used a shared identifier scheme for all these manuscripts, used not just by this project but by other projects that study the same texts, so in that sense we were in a fairly good place. So yeah, the
that's not just us by this project, but by other projects that study the same conference texts, so in that sense, we were in a fairly good place so yeah we didn't have to. Work with a ti was was a you know, we had some limitations, but it wasn't especially painful that wasn't wasn't enough. Thank you. In a question regarding the transcription part of it in the ta so as I understood the ti xml transcript text gets transformed into an html document. Within the manifest that is referenced and rend
ered on the page or or How does that work, so this know what what I can absolutely show an example, but it will be. A link somewhere, the only problem is there's not a good Jason format or on this browser so the the. The result looks a bit nasty. We actually parse it into a big json structure that reflects the structure of the original ti so there, it is quite complex if I type in something like Jason Lynn So you can see it. There are all these child elements, you know there there, there are dif
ferent types of punctuation marks, there are different. You know they've got books and chapters, and so, but now there are you know this is intellectual direction or this. This is a gap that they've got very complex ti and we've moved past that and stored as Jason the original xml is also stored, but as part of that parsing. When we passed the the xml and create the json we also create an html representation of it that's stored here. And you can, if they if they update the xml and update with st
udents and that will not be Wednesday November. And that's what we display alongside the image, because he had a format and requirements that we're not. We weren't a good fit for us to have a web application we wanted something that would be, but we can also set supervisor elevation and it's plain text, in fact, the rate at the top of this. Massive thing this is one person this group so it's a very. there's a lot of the tea is very complex that there is a plain text representation, right at the
Thank you. Time for one more question, if there is one. All right, thanks Matt. Up next we have Stefano Cossu and David Newbury from the Getty, talking about 'USE ME: progressive integration of IIIF with new software services'. Sure. Hi everybody, thank you all; we're here to talk about the work that we've been doing with IIIF over the past five or six years at the Getty. Rob Sanderson was part of the team at the Getty for a couple of years and brought it to our attention. We originally implemented it then, and what we'd like to do in this session is talk about what we've learned and share out what we've done. The Getty is an interesting place for many reasons, but among them that we're a museum, we're a library, we do exhibitions, we run interesting digital projects, and we're a publisher, and those are all places that use images really, really heavily. So there are lots of different ways we can take IIIF and see what you can do with it as a technology. The Getty also really thinks of itself as not just a place presenting those materials, but as a laboratory; the mission is really the visual arts for the world and support for the field, and being able to share back really fits that mission of what we do.
So, as I said, we originally implemented IIIF back in 2016, 2017: we had a level-zero implementation at the museum, and there was another implementation at the archives, which Rob led using Loris. But as we started looking at what we wanted to do, we took those implementations and said: these are two implementations, and there are probably six or seven other places in the organization that need IIIF. We needed to take a step back and talk about how we do this as a whole organization, not as a series of projects within different silos. That was a big project; it required a lot of development and architecture, and Stefano here has been the brains behind the architecture that we've built around IIIF, integrating with all the linked data work we've done. I can't speak to it as eloquently or as completely as he can, so he's going to do that. And now, the inescapable architecture diagram. I broke it up into little pieces, so I'll go through them one at a time and put them together at the end. This first part is something many of you might identify with: we have a lot of data sources, very different data sources, most of which we don't have control over. Some of them have nice APIs, some of them have less nice APIs, some of them don't have an API at all, and we have to figure out creative ways to get data out of them. One of the main problems with these systems is that some of them don't really work well in a distributed or high-availability environment, so we had to build a layer that protects those systems from going down and leaving our content managers sitting on their hands for half a day until we bring them back up. This is what we call the level-one cache. It is a set of transformer code; we are a Python shop. And, by the way, a disclaimer: I can claim only a very
small part of this architecture; I'm part of the team that does the IIIF and linked open data side of the architecture. So these transformers produce a very rough translation of the source data into plain JSON that stays very close to the source, so we can actually inspect the source data to find problems in the data sources, and at the same time this provides the layer of protection against system failure and overload. Next, we have another set of transformers that actually apply the content models we have agreed on at Getty Digital: we use Linked.Art for the linked open data and, of course, IIIF manifests; that's where things happen. This second set of transformers is orchestrated by what we call the task manager, which is a Python Celery application. It's event-driven software that acts on external events: when something is updated somewhere, the task manager listens to a feed of events and triggers actions that cascade into other systems, updating them recursively. And then we have the public-facing part, which we call the LD gateway. As you can see, there are several gateways for different parts of our data, but they all federate into a common data source, so we can actually perform SPARQL queries, for example, across the various repositories. We have one for the Vocabularies, and we will have one for media, which is just the technical metadata of the media.
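The production task manager is a Celery application; as a stdlib-only toy, the event-driven cascade it implements might be sketched like this. The event names and handlers are invented for illustration.

```python
from collections import defaultdict

# Registry mapping event types to handler functions.
handlers = defaultdict(list)

def on(event_type):
    """Decorator: register a handler for an event type."""
    def register(fn):
        handlers[event_type].append(fn)
        return fn
    return register

def dispatch(event):
    """Fire all handlers registered for this event's type."""
    for fn in handlers[event["type"]]:
        fn(event)

log = []

@on("record.updated")
def refresh_linked_data(event):
    # Re-run the transformer for this record, then announce the result,
    # which cascades into further downstream updates.
    log.append(f"re-transform {event['id']} to JSON-LD")
    dispatch({"type": "jsonld.updated", "id": event["id"]})

@on("jsonld.updated")
def rebuild_manifest(event):
    log.append(f"rebuild IIIF manifest for {event['id']}")

dispatch({"type": "record.updated", "id": "object-123"})
print(log)
```

In the real system, each handler would be a Celery task consuming a message queue rather than a direct function call, but the cascading shape is the same.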
There is one for the museum collection, one for annotations of all sorts (I think we're just at the beginning for that one), and one for the research collections viewer, which is the Getty Research Institute collections. Each of these has a triple store behind it that we can perform SPARQL queries against, or a simple REST API where you can pull JSON-LD from, and also an activity stream feed. And finally we get to the IIIF part, what's called GCS at the Getty, our shared image services (I made up the name). It is a set of other Python applications that feed off of the linked open data, so it's at the very end of the transformation pipeline, as close as possible to the presentation format, and it makes the IIIF manifests as well as creating the IIIF-capable derivatives for the images. We also have a fairly complex Varnish implementation that takes care of the caching management; we have several levels of caching, and Varnish does the most complex part, because it also takes care of vetting which images are open content and which have to go through another set of authentication steps. We wanted that to be as efficient as possible, and Varnish is a great choice for that. And this is it all together. Pulling it all together: we have the data sources that feed the level-one cache; the level-one cache outputs JSON that goes into
the LD transformers, which output JSON-LD into the LD gateway; GCS feeds off of that, plus the raw, large images from the data sources, to make derivatives that are served via the web gateway. In the middle of it we have the task manager, and an identifier manager, which is something we use to reconcile all the identifiers from different sources and create links between the different identifiers of the same things as represented in different systems. We not only have a shared software architecture, we also have production workflows that we have standardized over time, so our partners, our colleagues who manage the content, don't have to ask 'where do I put this image, how do I do this?'. There are already workflows for generating studio photography, for mass digitization, for marketing images; for example, when you create a marketing image, you put it into OpenText, which is one of those source systems, and we have a custom interface where you click a checkbox that says 'publish to IIIF', and that triggers a series of events, managed by the task manager, to create the manifest and the IIIF images. We also have an idea of where things go in terms of content modelling. Since we have very different types of content, we have to know what, in each source system, belongs in a canvas, or in an image, or in a range, or in a manifest, or in a collection, so we laid out a very broad-strokes content model so that we know what goes where. And with that, I'll hand over. So, as you see, we've put a lot of time and energy into building out that infrastructure used to serve up IIIF images, and we're really doing this because serving images, using images, and presenting them is core to almost everything we do as an organization, and it enables so many of the things we want to do to provide public access. One of the most obvious, simple things we do is the collections: we use IIIF in the collection pages, we use IIIF in the archival views. Deep zoom; multi-image viewers: here we've got the little thumbnails, you click on one, it shows it. This is the bread and butter of IIIF, what you enable using this sort of technology, and it's really important to do. It also, though, powers things like download: we know we need to be able to download images, to be able to say, oh, you'd like it in these three different formats; or you'd like to go look at the manifest to use it in other viewers; or maybe you'd like to click directly into a comparison view, what we call Mirador, because no one knows what that means. None of this requires really custom code on our part; this is directly supported by the IIIF infrastructure. And we also use annotations here.
One of the things you'll see on our collection website is that we put different coloured backgrounds behind different images, and these are driven by annotations: we've gone through and annotated palettes onto each of our canvases, so we know which colours are in that manifest, and that drives the background of those objects. We also use it, if you look down here, for a suggestion box for objects, and one of the core suggestion criteria is colour similarity, so the annotations on these images are driving that selection behind the scenes. Nothing says IIIF here, but because it's part of that infrastructure, the front-end developers said 'oh, we've got palettes, we can do things with this', and they didn't have to figure out anything beyond 'there's this data that I can use'. Also, outside of the collections, every image on at least the new portions of our websites (the parts that don't look like they're from 1995) is actually a IIIF image. So if you go to visit our website and there's a lovely picture of our campus, that's a IIIF image; and this lovely photo of the Spice Girls on our blog, this is a IIIF image. And this really empowers things. We don't need deep zoom for these images; they aren't collection images, they aren't for the sort of research we do. But it allows us to do responsive images: we have different sizes of these images for different screen resolutions, and those come directly out of the IIIF Image API. We also know that for many of these images we end up having custom crops; those crops are IIIF regions defined in the Image API, which provides those crops directly. The social-sharing images are IIIF images that are also derivatives. And it also lets us use the caching infrastructure that Stefano built out to handle these different resolutions, so that when the front-end developers say 'oh, it turns out I don't want a 16-by-9 here any more, I want a square image', it goes right into that caching infrastructure, and we don't need to recreate those derivatives just because the engineers changed their minds. This is not how we use it for collections, but it's really important for building out robust image serving in that web environment, as opposed to the collections. We're also using IIIF in our digital publications. The Quire team talked about the work they're doing as part of the showcase today; it's a digital publication tool that we've put out, and one of the key features they want for these scholarly publications is the ability to do deep zoom. But we also really want them to be preserved and not have dependencies on servers that will go down: these are scholarly publications, they have to last forever. I can't guarantee forever, but I can probably do better than two years. So what they do is pull the images in and create a level-zero IIIF server in there, which is really just a set of images and folders. That allows us to actually use IIIF in there, and we're working on being able to import a IIIF manifest from a server, pull it in, and clone it as a level-zero implementation, so that we don't need that dependency but you still have the power of IIIF. We've also discovered that one of the things people want is the ability to provide some level of interactivity in publications.
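The responsive sizes and custom crops described above map directly onto IIIF Image API URL parameters; here is a sketch, with an invented base URL and invented size choices.

```python
# Invented Image API base URL for a single image.
BASE = "https://media.example.org/iiif/image/abc123"

def sized(width):
    """Full image scaled to a given width: /full/{w},/0/default.jpg"""
    return f"{BASE}/full/{width},/0/default.jpg"

def cropped(x, y, w, h, width):
    """A region crop, then scaled: /{x},{y},{w},{h}/{width},/0/default.jpg"""
    return f"{BASE}/{x},{y},{w},{h}/{width},/0/default.jpg"

# An HTML srcset for responsive images, built from plain Image API URLs.
srcset = ", ".join(f"{sized(w)} {w}w" for w in (400, 800, 1600))

# An art-directed square crop for, say, a social-sharing card.
square = cropped(600, 0, 1200, 1200, 800)
print(square)
```

Because every variant is just a different URL against the same image service, a front end can switch from a 16-by-9 crop to a square one without anyone regenerating derivatives; the cache simply fills in the new URL on first request.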
And we've found Content State is a really nice way to do that. You have your image; you say, oh, you want to zoom to this part of the image; you just put a link in there that's a content-state link, and you trigger that interaction in the publication. It gives us a standardized way so that, even if we have to upgrade these systems in the future, we know what that metadata is; it's not hidden in the code somewhere, it's a standardized content-state annotation. So these are ways we're using IIIF as a standard
to really power interactivity, things outside of that traditional book reading or museum object display. We've also really found that the way we're using IIIF makes it easier to work with vendors. I've got a great digital team, but we can't possibly build everything the Getty wishes to build digitally, which means we end up working with partners, some of whom are in the room here. Being able to say 'oh, you'd like images from our collection? Go to the IIIF website and read the documentation' works because the documentation is really clear about how this all works; if you have questions, come and ask us, but people rarely have questions when the documentation is good. You have no idea what a headache that solves for me, because I can't write documentation well and our team's not great at it either, but being able to point someone to that documentation and say 'go and use this' is huge. Our audio guide uses IIIF, and they didn't have to ask us how to get images. If you're working on a digital publication for us, like Digirati, you don't have to ask us how images work in our ecosystem. Google Arts and Culture crawled our website and pulled images off of it; they know how to use a IIIF manifest at this point. It really, really helps to build out this ecosystem, because the communication gets so much easier. And finally, it really lets us do what we call the weird stuff. We build lots of weird digital interfaces, and they're really, really fast to prototype in this environment, because we don't have to build
that new image pipeline; the data ecosystem Stefano was talking about means you don't have to build a data API. We can think about interfaces, we can think about prototyping, we can try new things quickly. Some things work, some things don't, but having those shared standards underlying it lets us work really quickly and build out interesting things.

And where we're going next: we're beginning a collaborative project with colleagues at the Smithsonian. The Getty and the Smithsonian and several of the foundations came together to buy the archives of the Johnson Publishing Company, the company that published Ebony and Jet magazines. It's a photo morgue of about five million photographs. And we're doing it in partnership with the Smithsonian, which has an amazing mass digitization program and a really strong IIIF presence. And we're going to see whether or not we can actually build something where they have the images and we have the metadata, and we use the shared standards
to bridge between these two collections, to be able to share the burden of building out a site this large across institutions, using the shared standards. I'm sure there'll be new headaches that we discover in doing this, but this is really that interoperability we've talked about: the ability to say, we both understand this standard, and that gives us a way to work together effectively without having to negotiate a major technical standard. You know how long it takes to get agreement about how you would do this kind of thing; we're going to build on top of that, as opposed to trying to duplicate it. We also know this is a place where we're going to use computer vision, we're going to use crowdsourcing, we're going to use OCR, really leveraging the annotation capability, because we have five million images and all we know is the handwritten label on the folder an image came from. We're going to have to apply these sorts of techniques and really leverage those annotation capabilities to give us the metadata to make this content discoverable. So we're really excited about what you can do with this kind of technology, and IIIF is going to be the core of the work we do.

And so, as we think about the work that we've done in IIIF, and why we're doing it: IIIF is this enabling technology that lets us do the things we want to do more easily and conveniently, with shared communication and shared standards. It really enables the kind of prototyping work that we want to do, and it reduces the questions we have to ask each other, both for the people we're collaborating with and for ourselves. Stefano, who is a brilliant developer, is one of the team. If we had to negotiate how we serve images, how we cache things, how we order images, every time we tried to build something, it would keep us from doing the work that actually matters. Leveraging the work of IIIF, leveraging the work of this community, building on things like the cookbooks, gives us answers to questions that we know we have to answer, without having to answer them ourselves. So IIIF is a cool piece of technology, but the way it speeds the work beyond serving images, the work of reaching the world with the images and content that we have, is what makes it such an exciting technology for us. Thank you very much; happy to take questions.

Thank you. It's great to see the architecture diagrams, though there really weren't enough blobs and lines. It was good to see it in stages, too. Do you expect that to grow more, or integrate more? Where is that going with the service growth? — It will almost certainly grow. We're trying hard to manage how quickly it grows, because these things are hard to keep up and maintain, and also hard to convince everyone that they're essential. But yeah, I think that's the inevitable way these things go.

A follow-on, on the collaboration with the Smithsonian, if you can say more. It's interesting to explore models where not all the eggs are in one basket, so I don't know if you can say more about the split with the metadata, etc. — It's really early stages, but if it continues to proceed the way that we both feel it will work, they'll be responsible for working from the physical objects through digitization into the DAMS and into the IIIF ecosystem. And they'll be inserting identifiers for every object they scan into our identity management system as a crosswalk to those images. And then we'll take those images and those identifiers and build out records in ArchivesSpace and in our Arches item-level cataloging system, to expand on the pixels, so that we don't have to think about the physical objects themselves. That's really the Smithsonian's part: physical objects to pixels. We take those pixels, add metadata and intellectual structure to them, and build a discovery environment on top of that.

I may add something that I realized got kind of omitted from the presentation: the identity manager is actually part of the future-proofing strategy, in order to keep things together. Another part is the redirect service that we have, because resources will eventually be deprecated or moved sometime; their identifiers are not always permanent. So we built this
redirect service where you can generate redirects to either another resource or to a 410 Gone, rather than the resource simply not existing; there's a slight difference. So that is part of maintaining a long-living architecture.

So you will be expanding your architecture diagrams even further by next year? — And it was 2:30pm, so I couldn't expect much. I just want to confirm something; I think I'm impressed, but I want to make sure. It goes all the way back to one of the first couple of slides. It looks like you had a whole service path that went into JSON-LD and then did the JSON-LD processing. So does that do, like, expansion and compaction, and does it check the vocabulary, and if it fails, does it tell the user? I'm just kind of curious, because you had a whole thing focused on the LD part, and in most of these projects that's what people skip right over. — Yes, we do all the processing. I think a lot of what we do there is because we know we need a cache of this, or our systems... we've taken down, I think, every single production data repository that we have at some point; I've gotten very good at apologizing to the content managers about that. But that really represents the information model that the editors for those systems think about, and we're transforming it into the models that the presentation side thinks about. What that does, in our mind, is transform something like raw JSON that's dumped out, or something like TMS,
into something like Linked Art. So yes, it adds context; yes, it turns it into RDF; yes, it makes it available to go into the triple store. But the important thing is going from the domain model of an information management resource to the information model of a publishing resource. And so when we think about it, the level ones are how we think about it behind the scenes, and the level twos are how the world thinks about it: as objects or people, not as constituents in TMS or archival resources.

I wanted to follow up a little bit on that, because there is sort of a division there. When you decide that every front-facing page on the website is going to be using these images, there's a cultural change that has to occur, and I'm wondering if you could speak a little bit to how that occurred, maybe the timeframe in which it occurred, and some of the ways that you motivated your colleagues to adopt that. — Part of the work there has been working in the systems that they work in. How do we insert the ability to do the crops inside the content management system, so people don't have to learn IIIF annotation? How do we put a checkbox in the DAMS that says, make this IIIF? The closer we can get to those existing workflows, the better, because infrastructure is easy, change management is hard, and changing the way that people work is the hardest part of everything we've done. It took three years from the initial architecture diagrams to getting to the point where people actually used this. Some of that was building; a lot of that was figuring out how to communicate it effectively to the rest of the world. And a lot of it was also focusing not on the cool technology that we talk about in this room, but on saying: we know mobile is more and more of our website, and we need mobile crops. We're already making 25 derivatives for the different versions; to do the next version of the website we're going to have to add another 25. Do you really want to do all of that, or do you want to learn how to create crop regions in IIIF? So by being able to leverage that, this is the benefit that you will see in your workflows and in the public website. And we don't talk about IIIF to the content people very often, because they don't care, and they shouldn't have to care. It's not a technology for front-end content editors; it's an enabling technology that lets them do their job more easily, as long as Stefano and the rest of the team can find ways to smooth that transition.

I was curious if you are using features of the IIIF model apart from cropping and image sizing for the normal parts of the website, the parts that are not the digitized artworks. — Currently we don't use the manifests on that core website; it's the image service, so we're using that set of technologies. I will say, at one point we built out a Zen version of our website for fun that uses the bitonal image feature of IIIF to make everything look badly photocopied. I will also say we didn't work very hard to make sure that didn't hit production. But no, it really is mostly the image service and the work that it enables there. We've talked about whether we should use manifests, but that gets back to the question Cara was asking: training people to generate manifests for things like the slideshows in the presentations. At this point the barrier is still too high to pull those in. I think when we start doing things that require more complex presentation models, we'll see if we can find ways to leverage more features there. And we also know that, as we roll out more and more of our video features, the IIIF 3 video stuff, particularly around captioning, we're going to have to implement
that somehow, and it makes sense to do it via IIIF when we hit that point. — Yes. One more IIIF feature: for a period we published some collections, which we had a sort of mixed experience with, especially with the support in different viewers, so currently we are only publishing manifests and the structures within the manifest, like ranges. But I think that collections could add another semantic layer that we could use in different ways.

One more question. — Thanks. So, back on the collaboration with the Smithsonian and the large number of images coming out of that, and also the comment around discoverability: I don't know if I heard you correctly, but it seemed like you were alluding to potentially automated ways of generating descriptive metadata. I didn't know if you could speak to those workflows or what your thinking is there. — The thinking is that that's essential, because we can't possibly hire enough catalogers to do five million images, so we're going to have to use either the community or robots. How we do that is one of those things where we're looking for best practices from the field; we know we must do it, we don't really know how to do it yet. And I think the other area where we're really, really sensitive, and thinking carefully, is: as we use these processes, how do we communicate their trustworthiness as part of the interfaces that we build? I have full confidence that the robots will do a good enough job to make it useful and discoverable, but particularly for a collection of this sensitivity, I think the user interface and user experience around the trustworthiness of data is going to be an area I'll be thinking about over the next three to five years. Robots are good, and computer science moves faster than we will ever move, but communicating trust is really the thing that I think is going to be most critical as we figure out how to use those technologies. — All right, thank you, David. Thank you.

All right. You've heard his name invoked in the 3D conversations, and he's one of the founders of the IIIF Commons, but I'd like to welcome Ed Silverton to come talk about his Exhibit project and some of its uses.

Okay, so, I'm Ed Silverton, from Mnemoscene, based in Brighton in the UK, and I'm here to talk about the Exhibit project. Exhibit was a project that started in 2020. We were approached by the University of St Andrews just after the first lockdown started, because their students had started working at home and they had the beginnings of a dual delivery program, where they wanted to teach on campus but also remotely online. They'd also had a large 3D digitization project ongoing with their museum at the time, and they'd heard that I know a little bit about 3D and things like that. So they asked me, and there was funding for it as well, so we were looking to build a tool that enabled this kind of engagement for their students, but also for the public, and that would use their existing IIIF catalog. People have been creating stories with it, using IIIF images and 3D models. So... this video is not working. Some PowerPoint issue. Press play.

So this is the online catalog. You saw the Add to Exhibit button there; I'm going to use this record button to replay that. So, you use this record... and this is a IIIF collection with an Add to Exhibit button; it loads up a workspace, and then it jumps out to the Exhibit authoring tool. It passes in the IIIF manifest or collection, so you can pass in a collection and it will use every manifest in that collection, and it pulls in your metadata, very basic metadata, and then the Universal Viewer is on the right. We're able to view it because the UV already has the capability to load images and IIIF manifests and 3D models and all that, and also, crucially, if you're loading up a book, you can page through it and find the page that you want. So this is the basic user experience: you've got a list on the left, and then you can zoom and describe, to talk about different aspects of the image or 3D model, and you've got a kind of slide presentation. But there are other types
of presentations now that you can make with it. So: quizzes. We recently added this; it was a highly requested feature from various sources, actually, shortly after we launched the project. At the top there's a tab button list, so you can switch to the quiz, and this is a non-destructive action, so you can change between templates as much as you like and you're not going to lose any data. You can see the question and pinpoints; the tabs in the boxes change depending on what type of template it is. So I've asked the question there, and I can type in answers, as many as I like, and you can mark which one is the correct answer in the checkbox; we also allow multiple correct answers. You can reorder them as well, and edit them, obviously. So, marking that one as correct, and I've got pinpoints: you have a collection of pinpoints, and these are annotations with pinpointing as a motivation. You can double-click on the image to create a pinpoint, and it's exactly the same with 3D models; they look identical. And then you can reorder things as well, just by drag and drop, and you can intermingle questions with regular slides; they don't all have to be questions. So if I click next, I get a kind of "incorrect, please try again", and picking the right answer lets you proceed. You can also click on the pinpoints to select; it works both ways.

Another feature we added, which has proved to be very useful, is duplication. Let's see... duplicate... I'll press play again; I should definitely make these automatic. So you click on duplicate, and that has the same effect as Add to Exhibit; it's the same thing, going through the same process, so you can create a copy. And this is quite powerful, because the lecturers can pre-populate an exhibit with all their material easily from the catalog and then hand it off to their students, who can remix and reuse it; it takes some of the effort of going to find it off the students. So yeah, that's a useful feature. Otherwise, password protection was added fairly recently; it's a bit like Vimeo's. And here, talking about sharing: you click preview, and then open it in an incognito tab, type the password into the boxes, and so on.

So, yeah, those are the slides. And then, finally, for the education slash blended learning section of the presentation, just to show you, there's also a storytelling template. This is a scan of a walrus skull from a collection, and it's mixed with images and things like that, so you can just tell a story. You can zoom in, exactly as you would with an image, on a part of the 3D model that you're interested in. And, as I mentioned earlier, I had to basically invent my own annotation standard to save to the database. So I'd love to work on that with everyone, to come up with something nicer that everyone can agree on.

So, the next section: that was the teaching and blended learning bit; next you'll see just the straight-up online engagement stuff. This is an example from Royal Pavilion and Museums in Brighton; they run all of the museums in the city of Brighton and Hove, and they've really taken to Exhibit. They absolutely love it, and they've been rolling it out on their site. This is it on their site, embedded. They did a project recently to digitize some Chinese wallpaper in the Pavilion, and they embed these exhibits as you would the Universal Viewer; it's exactly the same principle, in their blog, and then you can tell a story. It works with all the templates: it could be a scrolling one, it could be a quiz, or the other ones we'll see. So that's embedding, and this goes on and on; it's actually really interesting, this one, there's so much going on in these images.

Campaigns... oh dear, this thing... I should have tested these slides on this computer. Oh. Yeah. Sorry. That's a real shame, because this is a good bit. Anyway, this is all in a OneDrive PowerPoint, but... I've become stuck there. Shall we run the PowerPoint directly? Well... I'll just do a live demo, then. I can't show the account, because it's on a staging server, but, basically, you can add YouTube videos now; the British Library requested that. And you can do the kiosk presentation, so you can press play and it will run through on a timer, and you can set the time for every slide. It's really cool, because they've got it set up in the museum, in situ, on screens; I wish I could show you, sorry. Anyway... there's a workshop coming up where, if you want to learn more about this, I can show you. Trusting the PowerPoint again... the full set is in there.

So, oh, and the other big part of this: everything's built on the Google Cloud stack; this is Firebase, and that's proven to be really, really useful, very nice to develop for. If you have a Google account, it's pretty easy to just set up a Firebase project; I can deploy it for you, I will set it up, and you've got your own instance. And because you've got your own instance, you've got all your own analytics, and basically everything that goes on when people are using Exhibit is tracked, every event, and you can see with quite fine granularity exactly how people are using it. That was a big deal for St Andrews, as they needed to demonstrate that the money was well spent, and the same with Royal Pavilion, a similar kind of thing. And the exhibits show up in Google search results as well, so it's got all the Google juice.

Just a clarification: when self-hosting this, it's still based on Google? — Yeah, so if you've got a Firebase account, you can add me as a user to log in and set everything up for you. And, as with the British Library, you can also turn features on and off, so the YouTube feature is actually behind a feature flag; they can turn that off if they don't want that,
and you can also override all the text and styles and everything like that. We did a project for the Brighton Festival where we changed the fonts and everything to make it fit, so when these are embedded in their site, they look just like part of the rest of the site. — Okay, great, thanks.

Hi. So you skated right through it, but the pinpointing motivation — I liked that one. I double-checked in Web Annotation; that's not one they list. Did you extend the motivations with that motivation? — Yeah. — Nice. And you also extended framing? — So, when you position the camera around a 3D model, there's nothing that really fit with that, so I thought framing made sense for that. Hopefully, later on, we can arrive at something standard.

Other questions? — What is the workshop that you mentioned? — I think you might have an old slide deck... it was on the 29th. If you go to iiif.io/events — I don't want to tell you the wrong date; there are four workshops that week and it's one of the four. I think what's happened is OneDrive hasn't synced or something. Yeah. Microsoft. But maybe it's an opportunity; maybe I can find it for you afterwards. This is why... oh yeah, so let's try and find the YouTube video. So, the British Library have their own YouTube channel that they're using; that's now added to Exhibit. And this is the UV on the right, and UV 4 has just been released, and it has this whole new concept of content handlers. We're excited about this; it's kind of blowing the doors off in terms of what we can do with the project. So this tab down here: a IIIF manifest is now just one type of content that the UV can load, so if I click on YouTube, that little tab pops up a different viewer. This is really important for the UV project, I think, because we get a lot of complaints like, why do I have to put my PDF in a manifest? If we can get a dedicated PDF extension, that's what people want; it's a use case that keeps coming up. So I can then put a comment in here, and I can select the duration, the part of the video that I'm interested in, and adjust it a bit. Okay, then, fingers crossed... hit play. "Baby skin-to-skin contact, where the hands connect. A sense of the spread or closeness of the fingers and thumbs around them." So, um, yeah, that's what they've got running on the kiosk machines at the museum, and it loops forever and ever, whether it's 3D models or images or videos or whatever.

Is there a way in Exhibit to export the JSON, or whatever might be driving this? — There have been a couple of requests for import and export of annotations; that keeps coming up, and I definitely want to do that. — And how IIIF is that?
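For context, the kind of time-based pinpoint being demonstrated above can be expressed as a standard W3C Web Annotation with a media-fragment selector; Exhibit currently stores its own format, so this is only a sketch of what a standardized export might look like, and every identifier and value in it is made up:

```python
import json

def clip_annotation(anno_id, video_url, start_sec, end_sec, text):
    """Sketch of a W3C Web Annotation targeting a time range of a video.

    The FragmentSelector carries Media Fragments syntax (t=start,end),
    the same idea as the time selector used for the YouTube slides.
    """
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "id": anno_id,
        "type": "Annotation",
        "motivation": "describing",
        "body": {"type": "TextualBody", "format": "text/plain", "value": text},
        "target": {
            "source": video_url,
            "selector": {
                "type": "FragmentSelector",
                "conformsTo": "http://www.w3.org/TR/media-frags/",
                "value": f"t={start_sec},{end_sec}",
            },
        },
    }

# Hypothetical example matching the kiosk demo described in the talk.
anno = clip_annotation(
    "https://example.org/anno/1",           # hypothetical annotation id
    "https://example.org/video/hands.mp4",  # hypothetical video URL
    12.5, 30,
    "Skin-to-skin contact, where the hands connect.",
)
print(json.dumps(anno, indent=2))
```

An export in this shape would let other annotation-aware clients replay the same clip selection without knowing anything about Exhibit's internal database format.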
So, another thing I don't think we've shown: there's this JSON option in the share menu, so you can access the JSON the tool uses, but in the UV there's a little bit of magic going on; it's not standardized. It's stored in the database in a semi-standardized format — the YouTube annotations just use the same time selector — but the UV at the moment has only quite initial support for annotations, a really stripped-down version. There's nothing to stop us exporting that JSON, though; the data is all there, in any format you like, essentially. Right now it needs a bit more work; when a project comes along where we do the import and export feature, then it will be standardized.

I have to come out as a big fan of Exhibit. I've passed it on to several friends and in workshops, and recently a friend of mine said, oh, this is brilliant, but it would be so great if we could just put this presentation, this slide view, on our blog system. Then I went to the share function, and the embed function was there, and it matched, or even exceeded, the expectation. I wanted to ask you something about usage: since you track the audience, you might have an idea how widely it's been adopted within some institutions. — I think last time I looked there were about 7,000 users or something like that. The Bruges public library were among the first people to run their own instance, and St Andrews are running an instance as well. And that's an important point: the back end is Firebase, that is, the Firestore real-time database, so when you're editing all this stuff on the left and dragging things around, it saves in real time; that's why it's so fast. So all the back end is hosted on the Google Cloud stack, but the front end is actually hosted on Vercel. The whole app is built with Next.js, which was built by Vercel, and they've got a really nice deployment platform. But because the front end is decoupled from the back end like that, you can put the front end wherever you want; St Andrews want to run the Next.js app on their own servers.

And do you have an idea of the ratio of viewers to people who actually edit their own exhibit? Because you make it really easy for people to see one and think, oh, that's brilliant, I would like to duplicate it or create my own; the gap between consumer and producer is quite small. — Yeah, there's a really tight feedback loop going on there. The whole design goal is to make this as easy as possible for people to use, and I've got to credit my partner Sophie for all the amazing ideas she had around UX to make it like that, because I'm a developer, not so much a UX person.

So you mentioned a couple of things that are in the works or coming down the pipe, and I'm wondering what your prioritization is for all those moving pieces, in terms of what to implement next. — One thing I've got to do soon is for Royal Pavilion. I've been testing it with teachers in Brighton, because they're interested in having teachers build collections and lessons around it. They've had teachers using it, collected feedback from them, and put that in a bulleted, itemized list for us to go through. And one of the big things is that we want user accounts. Right now, the way it works is kind of like a Google Doc, where you share a link to edit — an edit URL or a view URL, and the embed URL — and anyone with the URL is connected. That was actually a feature for St Andrews: we deliberately didn't build a conventional login system, because of their use cases. Anything that students log into, they're basically responsible for, and GDPR and all of that is a whole minefield, so this worked for them. Other people, like Royal Holloway, do want logins at a granular level, so it depends on the institution and its policies, but there's nothing to stop us adding conventional user accounts as well. So that's one of the big pieces, and then there's a whole laundry list of UX tweaks.

Time for one last question here. — So most of this experience is served statically, then? — Yes. In many ways Vercel just tracks the GitHub repository, and whenever there's an update it deploys to Vercel. And you can have your own custom domain if you like, either way. That's how Bruges work: they've got their own account, and I've deployed to that, though what gets created there is a kind of simplified version of the site. But it's quite a convenient way to do it, and you could equally just create your own domain and point it at the app as well. — Thanks for that.
All right, we are now ready for our tea and coffee break; it's set up right outside, and we'll reconvene at 4pm. Rocky, I'm going to send you... a minute or two. Yeah. Feeling better.

All right, why don't we get started and get settled; folks are still coming in at the last minute, but we're in our last section before we head out to the reception on the quad. For the first presentation, I'm very happy to introduce Ron Snyder, talking about using IIIF images in visual essays. You're all set, right? — Yeah, thank you. So we're around the turn and entering the homestretch for the sessions today. What I'm going to talk about will look familiar, because this idea of using IIIF images for digital storytelling has been touched on a few times earlier today, and hopefully I have a little bit of an angle or twist on it that will be useful.

So first let me introduce who I am, and then we'll jump into the guts of the presentation. I'm Ron Snyder; I'm with the JSTOR Labs team. JSTOR is part of the ITHAKA family of organizations, nonprofits that provide academic resources primarily to higher education institutions — we're doing more than that these days, but that's our primary mission. Labs has been around for going on eight years now, and we do a number of things. Our two biggest projects currently are providing JSTOR content and access in prison education programs, and a text and data mining platform; those are the two flagship projects right now. But Juncture, which I'll talk about today, is something where we've used a lot of IIIF, a lot of linked open data, and some geospatial mapping tools, and tried to merge all those together into, hopefully, an engaging and interesting application for digital storytelling. So I'm going to talk about the process that led to the creation of the Juncture tool set, reflect on
what we've built and some lessons learned, and then maybe look ahead at some changes and things we would do in the next version. As these things typically go, you learn as you go, and as much as we can we try to take advantage of that going forward. So, the origins of Juncture go back to one of the earliest Labs projects, a project called Livingstone's Zambezi Expedition. Livingstone turns up in tons of journal articles in JSTOR, and JSTOR also has Global Plants, a site that has something north of two and a half million type specimens, and we were thinking about how we might do something interesting with those type specimens. The Livingstone expedition project came out of the fact that a number of the type specimens we have were collected by Livingstone and his botanist, John Kirk I think his name was, around the 1850s in a couple of expeditions that took place in Africa. That project is now defunct, but there's
a video and some screenshots on our site if you're interested in it. The basic idea was that we had a current-day map, using Leaflet and all the infrastructure that goes with it. We had a couple of digitized maps that were created on the expedition, so the maps from the expedition were digitized and used as historical map overlays on the current-day map. Then we put pins on the map to represent the time and place where the plant specimens were collected, and a user could click on a specimen and bring up a high-res viewer for the nice type-specimen image. There were some other things we did: there was a lot of correspondence between the expedition and their collaborators, and we've got images of the letters and other things that occurred, but the biggest thing was the type specimens and trying to put them in a time and a place on the map. A very interesting project, and I'd always wanted to take another run at
the idea and do something more elaborate, taking into account more types of tools, and we had an opportunity in 2018 with the project referred to as Plant Humanities, which put a broader overlay on that original idea. That was the start of the Juncture tool that I'll demonstrate. In parallel with that, a researcher and professor from Kent in the UK contacted me about wanting to put together a site, basically a collection of visual essays related to Dickens and other authors and notable people from the county, and that kind of evolved into a much broader treatment in the Kent Maps project. So both of those things fed into our idea, or plan, to open-source Juncture, which we did at the end of last year, and that's what I'll demonstrate. So, the Plant Humanities project: I mentioned this was where we started with the idea of digital storytelling. This was
a Mellon-funded project that ran three years, and the initial idea continued the mapping approach of the earlier project I mentioned. As we started developing it and doing some user testing, we quickly found out that we needed to do something more with images: we certainly had these high-resolution plant specimens that we needed to show in an engaging and interactive way, so a deep-zoom sort of tool was definitely needed. We used a non-IIIF deep-zoom viewer initially, but as other types of images were brought into the visual essays we had to rethink that, and that was the first time IIIF appeared on my radar. Looking back, I wish we had learned more about it and jumped into it in a more thoughtful and planned way; it was an incremental thing, we started adding features and doing this and that, and what we ended up with works pretty well, but it's a Frankenstein sort of implementation,
and to do this over again I would definitely approach it differently. But that said, the IIIF idea and the capabilities provided through these external tools were a real eye-opener and opened up a lot of doors for these visual essays. So I'm going to skip the slides; I had them as backup in case the demo didn't work out, so I'm not going to go through a static version, and I'll try to jump to the site and go through one or two of the visual essays to show the idea here, provided I don't hit the escape key. Right. So this Plant Humanities site currently has, I think, somewhere around 18 to 20 of these visual essays. The essays appear as a gallery on the front page, each with an image and a general link to the essay, and they were created by some postdoc fellows that Dumbarton Oaks had on staff; they've had a number of these groups come through over the three years, and they have authored many of these. Some are actually authored by guest authors, very prominent people in the field, but I'll pick one that was recently created by one of the fellows; this might be the most recent one, on maize. The idea of the visual essay is that the text is the key element: the visualizations, the IIIF images, the maps, and the other types of visualization are there to provide context and depth, but the essay is really the key here. These aren't quite peer-reviewed articles, but they're scholarly in nature while trying to reach a broader audience, and there are proper footnotes and attributions and that sort of thing, so they were written in a very thoughtful way. As you scroll through the text on the left side, the visualizations change on the right depending on what the author wanted to accentuate in a particular section. So this is a viewer that has three IIIF images. These aren't particularly engaging in terms of interaction, though some of them do have the ability to zoom in and, using annotations, do a Storiiies- or Exhibit-style story-within-a-story with the IIIF images. This is one where they're trying to compare two images; I'm not exactly sure what she's trying to contrast here, but we do have a couple of ways of doing image comparison: we've got the side-by-side mode, as we see here, and there's also a swipe mode, where we put one image on top of
another and then reveal or hide the one on top, using a slider to move it back and forth. The other thing I'll mention here, which I meant to show a few paragraphs earlier, is that we also incorporate linked open data. You'll notice here we have a hover that brings up some information about this particular entity, an organization or company in this case, but it could be a place or a person. This data is pulled dynamically from Wikidata: there is a QID associated with this paragraph, and the tool goes through and looks for references to the entity associated with that QID. There is some matching of text, somewhat fuzzy matching, so it doesn't have to be an exact match; using the QID you can go out and grab the aliases and other labels for the entity and do this dynamic matching for the info box.
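As a rough illustration of the kind of fuzzy alias matching just described, here is a minimal sketch in Python. This is not Juncture's actual code: the alias list is hard-coded here, whereas the real tool pulls an entity's labels and aliases from Wikidata via the tagged QID, and `find_entity_mentions` and its threshold are hypothetical names.

```python
from difflib import SequenceMatcher

def find_entity_mentions(paragraph, aliases, threshold=0.85):
    """Scan a paragraph for phrases that fuzzily match any of an
    entity's labels or aliases (hypothetical helper; a real tool
    would fetch the alias list from Wikidata using the QID)."""
    words = paragraph.split()
    matches = []
    for alias in aliases:
        n = len(alias.split())
        # slide a window the same length as the alias over the text
        for i in range(len(words) - n + 1):
            candidate = " ".join(words[i:i + n])
            ratio = SequenceMatcher(None, candidate.lower(), alias.lower()).ratio()
            if ratio >= threshold:
                matches.append((candidate, alias, round(ratio, 2)))
    return matches

para = "Livingstone and his botanist collected specimens on the Zambesi expedition."
aliases = ["Zambezi", "Zambezi River"]  # placeholder labels/aliases for some QID
print(find_entity_mentions(para, aliases))
```

Note that the fuzzy ratio lets the historical spelling "Zambesi" still match the modern alias "Zambezi", which is the point of not requiring exact matches.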
Here's a case where we've linked an interaction with the image: clicking on one of these items will cause the viewer to zoom to a particular region. Here's another one, a case where we're using both a GeoJSON overlay and a marker pin to show a particular location on this map. More of the same as you scroll down: each of the paragraphs becomes active along with one or more viewers; in some cases there's more than one viewer associated with a paragraph, so let me find one just to see what that looks like.
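One way a click-to-zoom interaction like this can be expressed under the IIIF Image API is simply by requesting a region of the image. This is a sketch with a made-up image identifier; in practice a viewer such as OpenSeadragon usually animates to the region rather than fetching a cropped derivative, but the URL grammar is the same.

```python
def iiif_region_url(image_base, x, y, w, h, size="800,", rotation=0,
                    quality="default", fmt="jpg"):
    """Build a IIIF Image API URL for a rectangular region
    (x, y, w, h in full-image pixel coordinates). image_base is the
    image service URL up to, but not including, the region segment."""
    region = f"{x},{y},{w},{h}"
    return f"{image_base}/{region}/{size}/{rotation}/{quality}.{fmt}"

# hypothetical image service URL, for illustration only
url = iiif_region_url("https://ids.lib.harvard.edu/ids/iiif/12345",
                      1000, 400, 600, 600)
print(url)
```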
Here's another viewer bringing in a third party: this is a Knight Lab Timeline viewer, which the author created and linked to the essay. And here's one of the nice plant specimens from JSTOR Global Plants; again there are three in this case, and these are run through OpenSeadragon in a IIIF viewer. This one is a video, so you can embed the video. So there are any number of viewers, but far and away the IIIF images are the ones
most commonly used, and as we look forward to a next generation of Juncture I want to really lean into that idea of using IIIF as a key element in the essays. So that's another one with a bunch of GeoJSON overlays: we use the Who's On First link, again via Wikidata, so the author identifies the entities they want to use on a map, and then we pull the Who's On First GeoJSON overlay for each, using Wikidata to get from the entity to the Who's On First site.
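For the pins themselves, the data involved is plain GeoJSON. Here is a minimal sketch, with a placeholder QID and coordinates, of turning an entity's coordinates into a point Feature that Leaflet can render; it skips the actual Who's On First fetch, which would supply full boundary polygons instead.

```python
import json

def pin_feature(qid, label, lat, lon):
    """Make a minimal GeoJSON point Feature for a map pin.
    (Sketch only: a real tool could also fetch boundary polygons
    from Who's On First, keyed via the entity's Wikidata record.)"""
    return {
        "type": "Feature",
        "properties": {"wikidata": qid, "title": label},
        # GeoJSON coordinate order is [longitude, latitude]
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
    }

# placeholder QID and coordinates, for illustration only
feature = pin_feature("Q12345", "Canterbury", 51.28, 1.08)
print(json.dumps(feature))
```

The longitude-first ordering is the most common mistake when hand-building GeoJSON, hence the explicit comment.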
So while there's a lot of interesting things going on in the background, the authors don't have to worry about that too much; they basically need to do a couple of things: identify the entities, the QIDs, for whatever the essay is about, and find some images they can use. Those are the primary things. Real quickly, let me jump to this Kent Maps site. I won't spend a lot of time on this since I'm running into a time issue here, but it's the same idea: there are a number of essays, I think they've got probably 300 of them, and they have categories that you can drill down into that provide these kinds of themed essays, but it's the same idea. In this case we have a map. This one used to have a nice historic map overlay on it; I forget who mentioned Map Warper earlier, but the site supporting that process, a free, donation-supported kind of deal, has been having problems, and for some reason my historic map overlays aren't showing on this one. It provides a nice way to zoom in, and, in addition to zooming in on IIIF regions, we can also use the same sort of interaction to click on something in the text and zoom to a map region as well. Okay, so let me jump back to the PowerPoint, or the Google Slides. The Kent Maps site was a collaboration with a professor of Victorian studies; the idea was that they wanted to create these sorts of standalone essays but tie them together into a website that could be searched from Google and then accessed, and the Juncture framework provided a nice way of doing that. Let me get into the architecture in Juncture, maybe right after this slide. So this is what the author sees: markdown files, and those markdown files are the story. The whole idea is that there's a markdown file, with some custom markup, that defines the viewers to be created. The author creates those, and under the covers the framework does a number of things: IIIF certainly, Leaflet, bringing in a lot of images from various sources, Wikimedia Commons being probably the most common one, D3 visualizations. It does all of that transparently, so the authors don't need to worry about it. So this is what a
visual essay looks like. So, markdown: I know it's become sort of ubiquitous these days; most people who are tech-savvy have encountered markdown in one way or another, and even if they haven't, learning the basics doesn't take more than 10 or 15 minutes, so markdown was our choice of markup language for the visual essays. We start with markdown: our users format the text as markdown to the degree that they need, and then add any number of tags that are tied to particular paragraphs. The way a tag is associated with a paragraph is that it is contiguous in the text with that paragraph. This example is a sort of visual-essay "Hello World"; it's very basic, but it shows the key elements. The first element is not required, it's optional, but it's typically what a user will want to do: add one or more QIDs that express the aboutness of the essay, defining what the essay is about.
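The contiguity rule just described, that a tag belongs to the paragraph it touches, can be sketched with a toy parser. This is illustrative only: the tag syntax below mimics a `<param ve-...>` style but is not Juncture's actual grammar or parser, and `associate_tags` is a hypothetical name.

```python
def associate_tags(markdown_text, tag_prefix="<param"):
    """Pair each viewer tag with the paragraph it is contiguous with.
    Paragraphs are blank-line-separated blocks; any line in a block
    starting with tag_prefix is treated as a tag for that block.
    (Toy sketch; the real syntax has more forms than this.)"""
    paragraphs = []
    for block in markdown_text.split("\n\n"):
        lines = [ln for ln in block.splitlines() if ln.strip()]
        text = [ln for ln in lines if not ln.startswith(tag_prefix)]
        tags = [ln for ln in lines if ln.startswith(tag_prefix)]
        if text:
            paragraphs.append({"text": " ".join(text), "tags": tags})
    return paragraphs

essay = """Maize was domesticated in Mexico.
<param ve-image manifest="https://example.org/manifest.json">

A second paragraph with no viewer."""

for p in associate_tags(essay):
    print(p["text"], "->", p["tags"])
```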
That aboutness can be leveraged in a few different ways. Then, of course, there's the markdown text itself. And then in this case we have two images. One is an image that does not natively have a IIIF manifest, so what the tool is in essence doing is dynamically creating a manifest, using a few attributes attached to the tag: in this case we have a label, a license, and an attribution statement that will be pushed into the manifest and then displayed in the viewer.
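A dynamically created manifest of the kind described might look roughly like the following. This is a hedged sketch, not the tool's actual output: it wraps a plain image URL plus the tag's label, license, and attribution attributes in a minimal IIIF Presentation 3 manifest, and the IDs, base URL, and pixel dimensions are placeholders (a real implementation would read the image's actual size).

```python
def make_manifest(image_url, label, attribution, rights,
                  width=1000, height=800, base="https://example.org/iiif"):
    """Wrap a plain (non-IIIF) image in a minimal IIIF Presentation 3
    manifest: one Canvas, one AnnotationPage, one painting Annotation."""
    canvas_id = f"{base}/canvas/1"
    return {
        "@context": "http://iiif.io/api/presentation/3/context.json",
        "id": f"{base}/manifest.json",
        "type": "Manifest",
        "label": {"en": [label]},
        "rights": rights,  # a license URI, e.g. a Creative Commons URL
        "requiredStatement": {
            "label": {"en": ["Attribution"]},
            "value": {"en": [attribution]},
        },
        "items": [{
            "id": canvas_id,
            "type": "Canvas",
            "width": width,
            "height": height,
            "items": [{
                "id": f"{canvas_id}/page/1",
                "type": "AnnotationPage",
                "items": [{
                    "id": f"{canvas_id}/page/1/anno/1",
                    "type": "Annotation",
                    "motivation": "painting",
                    "body": {"id": image_url, "type": "Image",
                             "format": "image/jpeg"},
                    "target": canvas_id,
                }],
            }],
        }],
    }

m = make_manifest("https://example.org/maize.jpg", "Maize, Zea mays",
                  "Courtesy of Example Archive",
                  "http://creativecommons.org/licenses/by/4.0/")
```

Any IIIF viewer that speaks Presentation 3 can then render the wrapped image with its label and attribution, which is the effect described above.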
The second image is something that does expose a IIIF manifest; this one comes from Harvard, and they've already provided a number of IIIF images. So we have two images here, and then this last one is a map. The map takes a few attributes, the main ones being the center point and the zoom level. What the map does is create the map and then look for references in the text, creating pins or, in some cases, GeoJSON overlays on the map; and the way it does that is that it
looks at those QIDs and then scans the text and finds the references that correspond to them. Okay, time for another quick demo; I'll show you what that looks like. (Oops, I made a mistake here and pushed the escape button, which was my recovery action; I don't want to send us down some rabbit hole... there we go, okay, thank you. I was afraid my muscle memory would just push the escape button.) Okay, so we go to a GitHub repository that I set up with a couple of real quick demos. I mentioned that a visual essay is nothing more than a markdown file. Right now we only support GitHub; in a future version I want to be able to support other places to host these markdown files, but GitHub is convenient for a lot of reasons: we have the version control and everything else that goes with GitHub, and with GitHub Pages, if you want to assign your own custom domain to your repository, then you've basically got a website serving your visual essays at something other than our default domain. So, for the one I just showed,
something other than our soccer domain, so the the one I just showed that. click on this here we go now we're taking that markdown file and turn it into a visual essay this is that Hello world SA. So I mentioned the tool will take the queue ID and then do this dynamic linking So if I go down to the second paragraph you'll notice that we have a. marker on this this map and marker is associated with this term, which is associated with whatever the Q ideas for for a multi there's also another one,
for this other QID; we get these nice info boxes, and we get the geo-coordinates that are put on the map. Again, we have the ability to do the zoom-in interaction if you want to zoom in on particular things. So the markup isn't too horrible, I mean you do get those tags, but there's one thing, looking back and thinking about what I would do differently if I had a chance to do this over again. Let me go back to the PowerPoint. This is the current markup, and what I would like to do is simplify it: there's normally no reason to require fully formed, correct HTML tags and everything. Users who aren't familiar with this will invariably forget to put a closing quote or an angle bracket or what have you, and the tool doesn't do a good job of reporting errors, so that's a source of frustration for users; I take personal responsibility for that, because it was my choice to require fully formed HTML. But there's no reason to do it, so the next version will have a much-simplified tagging mechanism that doesn't require that. I'll give you a quick example of what that looks like; let me jump back. Here we go, this is the simplified version. It's going to do more or less the same thing; there are a couple of other differences that I'll talk about in a second, but this is the simplified version. We've got the header, which I didn't
talk about before: that puts in the banner header, and you can also put some navigation links and that sort of thing in there. Then there's the simplified tag, and also the simplified manifest reference. This is another thing that might be of interest to some folks in this room: this next-generation prototype that I'll demo, if I'm not running out of time, provides the ability to self-host IIIF images in GitHub, and you can use some conventions to identify the label and things like
s like the the. The licensing information, and so you can create a triple if image from github and that's what's going on here i've got a personal github account from that that i've put an image in and that that will be used for for this second version of the essay. and especially by just do that real quick. At the same time, sorry. yeah okay i'm. gonna go back here. So, this one is. This this this sort of prototype of second generation simplified syntax. i'm using this github sort of hosted ima
ge, this new version as a lot nicer way of showing. licensing and attribution information i've got this side panel that that you can bring up and provide some basic information, you can. associate a que ID for what this item is depicting, which is very useful for discovery I haven't done this yet, but my plan is to take all this and be able to do it provides me surge pricing searching for images about the particular location, based on these PICs you ids. same business with the zoom to do that, p
lease. Alright, so i'm going to try to wrap this up. So i'm around all day tomorrow, and this afternoon, my colleague Julia. Who is my partner in crime here will be around her question we'll talk about this more happy to do that. Show. I talked about. So the architecture, I think we talked about that. Will flash up the look at Terry. sharp correct Okay, so I think we validated the utility of providing markdown as a way to create these nice interactive essays and create websites and based on. pre
tty much of these together, I think that that idea has a lot of traction. What needs improved on the authoring tools right now there, there are no authoring tools you go into your github editor and you create your mark down and you deal with. debugging problems and such what I want to do the next version is have an interactive editor that provides preview capabilities and better. They are reporting and all that sort of thing search engine optimization is not great, everything is done client side
and. And that's tough for Google when you're dynamically loading your content and so that that's problems and needs to do something that does more server side, rendering like to really. take better advantage of triple if I think we were doing a decent job like to do better accessibility is not great. That that to pain viewer approach doesn't translate great to mobile phones need to think more about how we're going to. provide an easier flow back and forth between desktop and mobile So there are
a number of things we need to do, a kind of laundry list on this next page, some of which has already been done. As a matter of fact, maybe as a last thing, in my last three minutes: I have this editor that gets at the idea of a web-based authoring environment, with previewing and drag and drop. So if you find a IIIF image you want to use, you can drag the manifest URL into the box and it will pre-populate the tag. And many sites that don't provide IIIF images explicitly do provide an API; I'm talking about Wikimedia Commons, Flickr, JSTOR Community Collections: they don't provide IIIF, but they provide the image and they provide an API, so I can build a IIIF manifest on the fly; you drag and drop, and the tool goes and grabs the API data and creates the manifest. In my last 30 seconds I'll show you what that looks like. So here's a Wikimedia Commons page; we'll grab something random here. First I'll create a header, since I don't have one already, by dragging this over, and then I'll add another one. That creates this tag and the reference to the image, and when I push preview, behind the scenes it's creating the manifest and putting the viewer in the preview. I hold my breath at this point because I haven't actually tried these images... so, this one worked. This other one might take a bit longer... there we go, it pops into place. The other thing that's an improvement over the first gen is that when we present an image in the header we also have a way to attribute it: I can click on this and you get the licensing and attribution information. Before, we didn't do that, which wasn't very responsible image use; I think we're doing better now. And same thing here: you get all the bells
and whistles. You can also annotate from this: you can go here, create your annotations, and then they show up; let me do that real quick, and I apologize for rushing here. So we create that, and we've got an annotation on this image; if I go back and refresh, this guy should show up. Yes, we've got the reference here, we have an annotation, and I can also go here and show it on the image. That's kind of cool. So anyway, I'd love to talk about that more, but we've run out of time, so thanks, I appreciate the opportunity to talk, and I encourage you to try it: if you go to the editor at visual-essays.org, it requires you to create an identity; you can use a Google identity, or, if you have a problem with Google, you can do a simple username and password, and then try creating an essay. If you do, there's a link at the bottom with an email address; I'd love to hear what your reactions are and what you think about it. Alright, impressions, questions? You know it
wasn't the timing's fault. What is the state of completion of this? You know, if I brought it back to a faculty member and said, hey, you can use this tool, what would they get? Um, it's funny; I mean, it's a beta version today. The idea is there, and we're fixing bugs as we find them and improving. I think to some degree we're going to freeze version one with what you see there, functionality-wise, because as we move to this next generation it's going to be easier to make improvements without trying to maintain backwards compatibility. So I suspect, when all is said and done, we're going to have a version one that's pretty much what we have now, and that's going to live forever; I mean, it's open source, so you can fork it and you can own it. Or you can wait another few months and we'll have version two, with a lot better usability in terms of everything. Yeah, I'm wondering about the accessibility roadmap and remediation stuff, and this might be too tricky to say, but I'm wondering if you have strategies you're already considering; from experience, I've been struggling with how to handle lots of dense paired content in a way that doesn't bombard screen readers, so I'm just wondering, preliminarily, do you have any strategies? I've been told this is a bit of a challenge. Things like alt tags on the IIIF images, that's something we'd like to work on a little
bit, but for the other issues, yeah, we need to have a comprehensive accessibility audit; we'll do what we can, but fundamentally, you know, it is a challenge. Gotcha, fair. Thank you, Ron. Right, up next we have Laura Morreale and Sean Gilsdorf talking about "From Manuscript to Transcription and Back Again." Hi everyone, my name is Laura Morreale; I'm here with Sean Gilsdorf and also with Sarah, who works at the library. I want to thank the organizers of today's IIIF conference very much for allowing us to present our rather unorthodox paper at this meeting of IIIF-active scholars. We will be talking about going from manuscript to transcription and back again, and originally we said we were going to be closing the virtuous circle with Houghton MS Lat. 5, but in fact we have not quite done that yet, so this is really a report from the field. It's not a polished presentation and it's not a finished product, but really a slightly messy use case in
medias res: a study of a IIIF project in action, undertaken really with the best of intentions and, if I might say so, a rather entrepreneurial spirit, so thanks for hanging with us today. Now, what we want to share with you is our experience of using IIIF materials to make something: in this case, a collaboratively created edition of a text found in a medieval manuscript, in a project that involved students in Sean Gilsdorf's spring 2022 Latin paleography class here at Harvard, along with other outside participants who were interested in joining in on the transcription effort. Now, when you decide to make something, whatever that thing might be, you always need to gather your supplies, ready your workspace, and map a pathway to completion, and that is the approach we took with our collaborative edition project; so today we'll be walking you through those steps and revealing some of our motivations and, actually, some of our missteps along the way. What we'd like to mention first of all is that all the materials we chose for our class project are local; that is, they are all based here at Harvard as part of the university's knowledge ecosystem. The manuscript we chose to transcribe and edit is just a few buildings away at the library; it is catalogued, as I said, as MS Lat. 5, and it is delivered online via the library's IIIF delivery system. Since the manuscript is housed at home, we
were able to access years of accumulated knowledge captured by curators concerning the physical attributes of the manuscript itself, which would in turn help us as we undertook our project. Sean and I, as medievalists and as digital humanities practitioners, brought our own approach to the task at hand, including a template for our transcription.
