Hot off the email: Archives 2.0 – Saving the Past, Anticipating the Future: Save the Date! National Media Museum, Bradford, United Kingdom, 25 & 26 November 2014. Later this year, the National Media Museum will be hosting a two-day conference on the strategic acquisition and management of archives by cultural institutions. This event will bring together a number of key practitioners… Read more →
This missed the cut in the edit of a book chapter for an anthology the project team is working on. Thought it ought to appear somewhere, as it's vaguely useful to what this project has done. The Circus Oz Living Archive research team have had innumerable, and significant, discussions about content and curation — the very stuff of the archive —… Read more →
The 2013 program (with abstracts) for Museums and the Web.
It only accesses the public data in the project, but now others can write interfaces to the stuff we’ve made and do, well, interesting things with it. An open API is, for me, fundamental to how we can conceptualise a ‘living’ archive.
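As a sketch of what an open API makes possible, the snippet below fetches a public video listing and pulls out the titles. To be clear, the endpoint URL, paths, and field names here are invented for illustration; they are not the project's actual API, just the general shape of what a third party might write against it.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint and schema -- placeholders, not the real archive API.
API_BASE = "https://example.org/livingarchive/api/v1"

def fetch_videos(base=API_BASE):
    """Fetch the public video listing as parsed JSON."""
    with urlopen(f"{base}/videos?format=json") as resp:
        return json.load(resp)

def titles(records):
    """Pull the title out of each video record."""
    return [record["title"] for record in records]
```

Anyone could layer an interface like this over the public data — a timeline, a map, a remix tool — which is the point of calling the archive 'living'.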
It is a methodological field, not an algorithmic process (for those who recognise it, this is from Barthes' "From Work to Text", a nod to the importance of grand theory to my own work). This means it lies between the sorts of new practices that emerge when we apply novel digital techniques to things that can be treated as data, and what I'd like to think of as more traditional propositions.
…Finally, the more recent decline might reflect the move to a completely digital mode where, to begin with, it was trivial to record a lot (and we all did), but then storage of non-tape media (we could call it non-physical, but that isn't really accurate) on hard drives was informal: material may have been stored willy-nilly, then erased, lost, forgotten, deleted, and otherwise indirectly treated as ephemeral.
Fascinating presentation by David Pearson, Director of Culture, Heritage and Libraries for the City of London Corporation, about how marginalia shifts something that is more or less anonymous (my words) into an artefact of value. (It reminds me of an old project by computer scientist and ethnographer Cathy Marshall, where she made a prototype annotation program for a laptop. Her method involved buying heavily annotated second-hand editions of textbooks at university book stores and interviewing their purchasers, to model existing annotation practices.)
Testing the Facebook like button…
5.1 Joining of Sequences

If the source media has been split into multiple files by hardware or software, the most effective way to deal with such files is to:

- compress them using this protocol individually (things work faster when dealing with smaller files)
- use QuickTime 10.x to do a simple 'butt edit':
  - open the first clip
  - locate the next clip in the Finder
  - drag the second clip into the first (which is open in QuickTime Player); it will drop in where you place it
  - repeat
  - click 'done' when finished and you will be prompted to save it
  - name it, and when saved it will be ready for upload

THE SETTINGS

Description: 640 px frame controls on 1000 kbps max resizing
File Extension: mov
Email notification to:
Time remapping: source frames play at 24.000 fps

Audio Encoder: AAC, Mono, 24.000 kHz

Video Encoder:
- Width and Height: Up to 640 x 360
- Pixel aspect ratio: Default
- Crop: None
- Padding: None
- Frame rate: 24
- Frame Controls On:
  - Retiming: (Fast) Nearest Frame
  - Resize Filter: Linear Filter
  - Deinterlace Filter: Better (Motion Adaptive)
  - Adaptive Details: Off
  - Antialias: 0
  - Detail Level: 0
  - Field Output: Progressive
- Codec Type: H.264
- Multi-pass: On, frame reorder: On
- Pixel depth: 24
- Spatial quality: 75
- Min. spatial quality: 25
- Temporal quality: 50
- Min. temporal quality: 25
- Average data rate: 1.024 (Mbps)
- Fast Start: on
- Compressed header requires QuickTime 3 Minimum

Watermark:
- Position: Lower Right–Title Safe
- Scale By: 1.000
- Alpha: 0.500
- Repeat On
- File Name: Beatrice:Users:amiles:Movies:00 oz tests:watermark.png

(These settings cannot simply be replicated outside of Compressor as a) some of the terminology is specific to Compressor, and b) other software lacks this particular setting's ability to scale to a maximum dimension using the source aspect ratio.)
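For readers without Compressor, a rough equivalent can be sketched with ffmpeg. This is a sketch only: as the note above says, the settings cannot be replicated exactly, so the scaling, deinterlace, and bitrate choices here are my best approximation of the preset, and the file names are placeholders.

```python
import subprocess

def compressor_like_cmd(src, dst):
    """Build an ffmpeg command roughly matching the Compressor preset:
    H.264 at ~1 Mbps, scaled to fit within 640x360 while keeping the
    source aspect ratio, 24 fps progressive (deinterlaced), AAC mono
    at 24 kHz, and 'fast start' for progressive download."""
    return [
        "ffmpeg", "-i", src,
        # Deinterlace, then shrink to fit 640x360 without distortion;
        # force even dimensions, which H.264 encoders require.
        "-vf", ("yadif,scale=640:360:force_original_aspect_ratio=decrease"
                ":force_divisible_by=2"),
        "-r", "24",
        "-c:v", "libx264", "-b:v", "1024k",
        "-c:a", "aac", "-ac", "1", "-ar", "24000",
        "-movflags", "+faststart",
        dst,
    ]

# To actually run it (assumes ffmpeg is installed and clip.mov exists):
# subprocess.run(compressor_like_cmd("clip.mov", "clip_640.mov"), check=True)
```

Note this does not attempt the watermark overlay or Compressor's specific retiming filters.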
These are my links for September 26th through November 6th:

- No such pipe, or this pipe has been deleted – This data comes from pipes.yahoo.com but the Pipe does not exist or has been deleted.
- New Humanities | New Humanities – a new website-cum-project called "new humanities" out of Rome.
- Paper Machines – Zotero extension to visualise your… Read more →
Key points: they are very interested in ways that what they create might offer data and services back to the originating content providers (e.g. a widget that connects biographical data with a performer's name in the living archive) – so they wanted to be sure that people could be disambiguated in our database.
…they are building the vocabularies etc., though I did learn about object name thesauri, which was pretty cool in a deeply metadata-geek way. As best I could tell they want to come knocking on our API around about May 2013… But things to do:

- confirm that people have unique identities in the database
- can we provide access via linked data/RDF? (how and/or who?)
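As a sketch of what 'unique identities plus linked data' could look like, the snippet below mints a URI for a performer and emits a few Turtle triples using the FOAF vocabulary. The base URI pattern and the choice of properties are assumptions for illustration, not a decided design for the archive.

```python
# Minimal linked-data sketch: give a performer a stable URI and describe
# them in Turtle with FOAF. Plain strings to avoid dependencies; a real
# implementation would use an RDF library and the archive's own namespace.

FOAF = "http://xmlns.com/foaf/0.1/"

def performer_turtle(person_id, name):
    """Return Turtle triples for one performer.
    The base URI below is a placeholder, not the archive's real namespace."""
    uri = f"<https://example.org/livingarchive/person/{person_id}>"
    return (
        f"@prefix foaf: <{FOAF}> .\n\n"
        f"{uri} a foaf:Person ;\n"
        f'    foaf:name "{name}" .\n'
    )

print(performer_turtle(42, "Jane Example"))
```

The point of the stable URI is exactly the disambiguation question above: a widget consuming our data can say *this* performer, not just anyone with the same name.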