Digital humanities to the rescue on reductive assessment?

The DH & Assessment session Saturday afternoon started with the usual mini-rants about our regional accreditor and reductionist assessment and turned into an “oh, wow, here’s a tool for this” discussion. The core of the discussion was the open-source <emma> assessment tool built for the University of Georgia’s first-year composition class. (Links: the <emma> front-end, which will be frustrating because it’s just the sign-in for UGA students, and the website of the Calliope Initiative, the non-profit that is continuing development and handling the business end for other institutions.)

At the lunchtime Dork Shorts, Robin Wharton had demonstrated the gist of <emma>: students submit papers in Open Document Format. Then instructors and peers can comment on specific passages and code their comments by area (e.g., thesis development might be coded green, something else yellow, etc.). Students’ revisions are linked to their original documents, they declare when a revision is the final version, and so on. So far, this looks like a useful, user-friendly way to comment on student work.
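For my own notes, here is a rough sketch (in Python) of the kind of data model that would support passage-level, color-coded comments and linked revisions. To be clear, the class names and fields are my own guesses for illustration, not <emma>’s actual schema:

    # Hypothetical data model for coded, passage-level commenting.
    # None of this is <emma>'s real code; it just captures the workflow above.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class Comment:
        passage_start: int   # character offset of the highlighted passage
        passage_end: int
        category: str        # e.g., "thesis development", displayed as a color
        text: str
        author: str          # instructor or peer reviewer

    @dataclass
    class Submission:
        student: str
        odf_path: str                           # the ODF file the student uploaded
        submitted_at: datetime
        revises: Optional["Submission"] = None  # link back to the draft this revises
        is_final: bool = False                  # student marks the final version
        comments: List[Comment] = field(default_factory=list)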

In the afternoon session, it became clear that <emma> was also being used for institutional assessment: the system can look at the comments and their categories, a sample of papers can be drawn for assessment by a set of readers, disagreements on basic judgments between two readers can be kicked to a third reader or another moderation process, and so on. The system also has the capacity to support conclusions such as shifts in comment categories (i.e., student skill development) across a course or a longer span of time. In other words, institution-level judgments based on the day-to-day evaluative culture within composition instruction.
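Just to make that roll-up concrete: counting comment categories per assignment and watching the shares shift would be enough for a first pass. The sketch below uses invented data (plain dictionaries) and is not <emma>’s actual reporting code:

    # Hypothetical roll-up: share of comments in each category, per assignment.
    # A falling share of "thesis development" comments between essay 1 and
    # essay 3 could be read (cautiously) as evidence of skill development.
    from collections import Counter, defaultdict

    def category_shares(submissions):
        """Return {assignment: {category: share of that assignment's comments}}."""
        counts = defaultdict(Counter)
        for sub in submissions:
            for comment in sub["comments"]:
                counts[sub["assignment"]][comment["category"]] += 1
        return {
            assignment: {cat: n / sum(c.values()) for cat, n in c.items()}
            for assignment, c in counts.items()
        }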

Those at the session had the obvious questions about the system (expensive? it was developed by one person in the English department who taught himself programming, along with two graduate students) and then we started talking about what would be necessary to develop parallel systems for performances (e.g., faculty-juried music performances at the end of the semester). So we gabbed a bit about Pear Note, Transana, and some other options. And then we discovered that, because the base documents students submit are ODF files, they can include media. Hmmn…
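The reason that last point is promising: an ODF document is really a ZIP package, so any embedded images or audio travel inside the file the student submits. A quick sketch of listing the media in a submission (“essay.odt” is just a placeholder path):

    # List media files packed inside an ODF document (an .odt is a ZIP archive).
    import zipfile

    MEDIA_EXTENSIONS = (".png", ".jpg", ".jpeg", ".gif", ".svg", ".mp3", ".ogg", ".mp4")

    def embedded_media(odt_path):
        with zipfile.ZipFile(odt_path) as package:
            return [name for name in package.namelist()
                    if name.lower().endswith(MEDIA_EXTENSIONS)]

    print(embedded_media("essay.odt"))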

Bottom line for me: Huge thanks to Rob Balthazor and his team at UGA for showing how digital humanities can put assessment on a much less shaky footing.

Sexy Good Web design

Here’s my funny poster & paper, as well as a summary of tweets from my phenomenally fabulous session: adellef.co/270

Game session notes

Game session notes (editable version). Static below:

Jane McGonigal, Reality is Broken — game definition:

  • Goal
  • Rules
  • Feedback
  • Voluntary entry


Group edited notes from Messy DH session

Here ya go, folks!

Also, breakout session for generating blog post topics relating to these messes at 2:30 in RM756.

The document from the breakout session.

Notes from Digital Humanities in Higher Education session

Notes from this session (not group-edited) are on Google Docs at bit.ly/fo4daF.

Digital Images – problem space

Notes from the session are below in the comments OR on Google Docs at bit.ly/eZrbqH.

I am struggling with the problem space around how best to provide digital images for teaching and research across a large campus and multiple disciplines.  How to get one’s head around issues of

  • usability (discovery and presentation),
  • ingest/cataloging,
  • preservation, and
  • rights management

I love the cool “technology ecosystem” graphic that shows Omeka falling at a crossroads of Web Content Management, Collections Management, and Archival Digital Collections Systems, and would like to know more about how this might work with more academically focused products, like ARTstor SharedShelf or Luna Insight.

On a similar but different note, the draft ACRL/IRIG Visual Literacy Competency Standards for Higher Education says a visually literate student…

  • identifies a variety of image sources, materials, and types
  • conducts effective image searches
  • situates an image in its cultural, social, and historical contexts
  • evaluates the effectiveness and reliability of images as visual communications
  • uses technology effectively to work with images
  • produces images for a range of projects and scholarly uses
  • understands many of the ethical, legal, social, and economic issues surrounding images and visual media

Are we ourselves visually literate?  Are the DH tools and projects that we are creating promoting these skills in our users?


Envisioning librarian-scholar collaborations in the semantic age

As a metadata librarian, I’m always interested in learning new ways not only to attract new digital repository content but also to increase efficiency in adding descriptive metadata to that content.  Collaborations between digital repository librarians and digital humanities scholars can support both of these aims as well as provide benefits for scholars.  By storing the products of digital humanities projects (e.g., digitized primary sources, born-digital media) in the repository, librarians can make this content accessible to broader audiences and can tap into scholars’ subject domain expertise to provide valuable descriptive metadata at little cost to cash-strapped libraries.  In return, scholars get free, permanent storage for the digital assets that support their projects and guidance from librarians on digital project planning and on using standards and best practices to manage their metadata.

Often, the metadata that scholars care about extends beyond the bibliographic metadata traditionally collected in library catalogs and digital library collections.  To attract scholars to digital humanities collaborations, libraries need to be able to store and make accessible this domain-specific metadata.  As we move towards storing and publishing metadata in RDF, we will soon have the flexibility to accommodate these new metadata demands.
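To give one concrete (and entirely made-up) example of that flexibility: with a library like rdflib, a repository item can be described with standard Dublin Core properties and, in the same graph, carry domain-specific statements from a scholar’s own vocabulary. The “proj” namespace and its properties below are invented for illustration:

    # Sketch: bibliographic + domain-specific metadata for one item, in RDF.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC

    PROJ = Namespace("http://example.org/project-vocab/")  # hypothetical scholar vocabulary

    g = Graph()
    g.bind("dc", DC)
    g.bind("proj", PROJ)

    letter = URIRef("http://example.org/repository/item/42")  # a digitized primary source
    g.add((letter, DC.title, Literal("Letter from A to B, 1843")))
    g.add((letter, DC.creator, Literal("A")))
    # Statements the traditional bibliographic record has no field for:
    g.add((letter, PROJ.mentionsPlace, Literal("Savannah, GA")))
    g.add((letter, PROJ.epistolaryGenre, Literal("letter of introduction")))

    print(g.serialize(format="turtle"))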

Preparing librarians to work with this new data structure is one major obstacle we’ll have to overcome, but I’m interested in having a conversation about what skills will be valuable to librarian-scholar collaborations as we enter the semantic age.  How do we start incorporating ontology into our project designs? (Is ontology even on humanities scholars’ radar? It certainly isn’t much more than a blip yet in the library world.)  How do humanities scholars currently map their knowledge domains?  Are there any shared data models or standards in the digital humanities that would help guide development of new best practices?  What roles should librarians play in helping scholars apply ontology to digital projects?  (And should librarians even play a role in this?  Do we even have the chops to become knowledge management consultants?)  What tools would be helpful in facilitating these collaborations?  Are there existing tools we could build on?

I share cartera’s “big digital pile” view in that I have little sense of how scholars use our digital resources and what more they want out of them beyond simple search and discovery.  I don’t have any strong opinions or answers yet to this big pile of questions; I’m hoping to gauge interest and experience within both library and humanities communities so I can learn how to better frame the issue.

Alternate Parking for those staying at Conference Center or Inn

I called today and was told I could park at the Inn (around 8 AM) Friday, come in, confirm my reservation and get a parking pass, take the shuttle and then “check-in” later.

This may be useful to others. Hope all have safe travels and see you tomorrow!

G

Visual representation of information

I find that I am fascinated by the visual representation of information, along the lines of Stanford’s Republic of Letters (republicofletters.stanford.edu/).  I would be interested in a discussion of what specific explorations and findings have arisen from such projects.  How have these quantitative displays led to new thoughts on qualitative aspects of the material?  Do some methods produce better results than others?  How are we seeing this play out across the landscape of digital humanities, and to what innovative avenues of research are these discoveries leading?

THATCamp Meetups

THATCamp is almost here, and BootCamp starts tomorrow. We know that you’re all coming to work, learn, and get your hands dirty working on your projects. But a THATCamp isn’t all just blood, sweat, and tears: that’s why we’re happy to announce our meetups for the weekend.

On Friday night, we’ll be congregating at Manuel’s Tavern in Atlanta’s Poncey-Highlands neighborhood, only a short drive from Emory. Come for a bite and a drink, whether you’re just arriving prior to the Camp or have been programming all day at BootCamp. The address is 602 N Highland Avenue Northeast, Atlanta, GA 30307-1433. We’ll start arriving around 6pm. It’s a “seat yourself” kind of place, so look for us in the big room to the left when you come in the front door.

By Saturday night, we’ll have fed our brains but need something for our stomachs. Accordingly, we’ll head to Taqueria del Sol in Decatur Square, again a short drive. This is a casual place, and we’re not going to call ahead and tell them 100 hungry THATCampers are on their way. Instead, we’ll just invite people to come in groups: stand in line, make conversation, and enjoy some killer tacos. We’re told they sell margaritas and a range of tequilas as well. The address is 359 West Ponce De Leon Avenue, Decatur, GA 30030-2442. If for some reason you don’t want tacos, there are lots of other amazing options only a short walk away from Taqueria del Sol.

See you soon!
