Joseph’s Annunciation (Mt. 1:18-25)

This morning I read the Greek of Matthew 1:18-25, and I have to admit it was some of the easiest Greek I’ve ever read, even though I don’t think I’ve read this story before in Greek. Why? Because the story is so familiar. I found myself translating in King James’ English, since I’ve heard this story so many times. Often I tell my students that one of the biggest challenges to reading the Bible critically is to “forget” the stories that we’ve heard in church contexts so many times and let the text be foreign once again, as it would have been to those first-century audiences who had never heard the story before.

But this morning I realize that’s perhaps not the best hermeneutical approach. For the last few months, I’ve been reading the patriarchal narratives of Genesis in a Hebrew reading group I’m in. That background gives me a different understanding of this announcement of Jesus’ coming birth to Joseph. What I realized, having read so many announcement and etymology stories in Genesis, is that this would have been a very familiar story to that hypothetical first-century audience I’ve been asking my students to imagine. That is, this announcement of Mary’s divine impregnation, prediction of Jesus’ salvation of people, and instruction for his naming, is absolutely ordinary in the context of the narratives of the Hebrew Bible. Nothing stands out; nothing.

So, I’m starting to think that the feeling of familiarity should stay with us as we read this narrative. Sure, the predictions for Jesus are big. Sure, there’s a certain anticipation or excitement for what this particular child will do. But the narrator has led us into familiarity. As I noted with the genealogy, the genre of literature here is nothing new. But we recognize that there is going to be something new about this story. The challenge for us as readers is to allow ourselves to be led down this path of the mundane and familiar, so that the narrator can flip the script on us. So, rather than “pretending” I’ve never heard this story before this Christmas season, I’m going to remember and read it in that same King James English, expressly so that it will be familiar. Then, as my guard is down, I expect to have my eyes opened anew to the advent of Christ. The familiarity with the story is a gift, as it lets us experience anew this same pattern of announcement and expectation, unaware of where and how that experience will come.

The Real Genesis (Gn. 1:1-25)

This morning I read the first 25 verses of Genesis, the account of the creation up until the creation of the “man” (ADAM) from the “ground” (ADAMAH; btw, I’m going to be really lazy with my Hebrew transliteration). Like the Matthew genealogy, this is a pretty routine passage that I’d normally skip over rather quickly. Also like the Matthew passage, it has some oddities that make me sort of chuckle. After all, how can you have day/night (v. 5) when you haven’t even had the creation of the sun and moon (v. 16)? Silly Bible! But what struck me on this reading was that the silly aspects of it come from the imposition of our order on the creation narrative. That is, the narrative is told in such a way that it makes sense to a human reader who is used to the work of creation. Even the verbs used, like “Make” and “See” and “Declaring Good,” are the types of verbs we’d use to describe a bench we’re making or a field we’re plowing. With this language, the narrative of creation seems rather mundane. I liken it to other “creation” narratives in the Bible, like the creation of the tabernacle or the re-building of the walls of Jerusalem. There is a particular order of steps that are followed, and each builds on the last. So, you have light/darkness created so that you can have day/night. You have the firmament put in place to separate the waters so that you can have dry land and gatherings of water, so that you can create beasts to fill the waters and beasts to fill the earth.

But back to those oddities that make me laugh in the narrative. These break up this ordering. They make me pause in the narrative and realize that my normal structure to this creation doesn’t make sense. They’re like those little tears in the Matrix that give you some sense that what you’re seeing is not really the reality. Like Origen’s “stumbling blocks” in the gospels, the oddness of the creation account gives me some sense that I’m not getting the full picture, just one take on what may have happened. People tend to get squeamish in seminary world when we talk about the creation narrative, science, etc. But the reality is, why would we want a creation narrative that literally recounts how everything got made, in 6 days no less? Surely the process was a bit more complicated than 25 verses can capture. So I tend to appreciate this narrative for what it is: a human’s imagination of a divine act. This is why we put it in stages. This is why we personify the divine as satisfied with a good day’s work. But this is also why we have a “dome” that separates heaven from earth. This is also why we have irregularities that don’t add up. Of course they don’t add up, because what we’re doing is translating a divine act into human language. That’s going to “fail” in some sense every time. And thank goodness that it does. A divine act that can be sufficiently captured by human language is not all that divine, methinks.

So I’m going to enjoy continuing to read this human narrative of creation, told with language that is so human and workmanlike. I am going to enjoy it not because it completely captures what happened in those 6 “days,” but rather because it attempts to give me some sense, in language I can understand, however imperfectly, of the majesty of what has been created. God made it all. That, to me, seems to be what the author’s getting at. It’s up to me to realize that recognizing that everything was made by that same divine figure, in whatever manner it was made, has some serious implications for how I treat other things and beings.

The Genesis (Mt. 1:1-17)

This morning I read the genealogy in Matthew (Mt. 1:1-17), a passage we all normally skip over. After all, it is a bunch of names begetting names. Easy Greek, boring story. In slowing down, though, I did notice a few things that I never really stopped to think about before. First, this is not a genealogy of Jesus, and that’s a bit surprising. Of course we all recognize that this is the genealogy from Abraham through Joseph. I was struck, however, by the fact that the author doesn’t really clue you into that until you reach the end. That is, the book begins as standard books do, with the “book of the history of Jesus Christ, son of David, son of Abraham.” The genealogy proceeds, then, to recount the progression toward Joseph, but then the script is flipped on you. In v. 16 we learn that Joseph is “the husband of Mary, from whom Jesus, the one called anointed, was begotten.” After all these active verbs (“X begot Y”), we get our first passive verb, and it’s Jesus being born of Mary. There is always that point in middle school Sunday school where each of us determined we were smarter than the teacher and noted that this genealogy leads to Joseph, who, according to the later story, isn’t even a blood relative of Jesus. We think this is odd. But reading carefully, it seems the author is right there with us. He’s noting that this story is not like the toledot of the OT (see Genesis 36, e.g.) that recount how we got from one man (in that case Esau) down to other figures of history. Very early on Matthew is letting us know we should expect something different: one whose story looks a lot like the stories we know, but ultimately is quite different. Imagine if you’re hearing this for the first time and those toledot legends are the stories you have grown up on. Matthew seems to start in on yet another. Just like Abraham, Jacob, Esau, etc., this Jesus is another figure. As Matthew is going to show us again and again, though, this one is different. The opening chapter subtly introduces this motif that he’ll trade on throughout the narrative.

A second thing that sticks with me in this reading, though, is the marking of the story of Israel with the “deportation to Babylon” (vv. 11, 17). The word literally means something like “change of homes.” We all know historically and sociologically what a significant event this was for the development of the nation of Israel. And we have all thought about how odd a marker this is for the genealogy (like the inclusion of Tamar, Rahab, and Ruth). However, what strikes me here is the language of “changing homes.” This writer seems to note how important stepping outside of the physical geography of Israel’s story is for what has led to his story. That is, the story took a circuitous route through Babylon to get where it is today, and that route is worth noting, as significant a marker as Abraham and David (v. 17).

So it’s clear from the very opening of this chapter that this is a story of continuity and discontinuity. Matthew’s tying this figure of Jesus to the story of Israel we all know, but he’s doing so in an odd way. Perhaps he’s suggesting that this story we’re about to hear is going to make us rethink that story of Israel. We’ll start to recognize why events like the “changing of houses” are so significant. We’ll start to think about why the figures like Tamar, Rahab, and Ruth are significant. We’ll start to think about why blood relations matter, and yet why they do not. We skip over this chapter often because it strikes us as a mundane beginning to this story. This is the background the author feels he needs to include just to set some context for Jesus. As we slow down, though, we realize that this passage is not simply setting a historical context for Jesus, but rather introducing us to the hermeneutical shift that Matthew believes Jesus causes.

Presenting My Work on Biblical Commentaries at The University of Alabama Digitorium

I spent this past weekend at the inaugural Digitorium, a digital humanities conference sponsored by the University of Alabama Digital Humanities Center. It was a great experience, and I hope to attend again next year. The program was a mix of pedagogical and research applications of digital tools to the study of the humanities. The conference was wonderful because I got to see all of the interesting things that people are doing. It’s hard to walk out of a conference like that and not want to run back home, do more with tools like Omeka, and start brainstorming about how digital tools can do more than make our research and teaching more efficient: how they can create entirely new subfields of research.

For my own part, I participated in a session entitled “Teaching and Analyzing Writing Digitally.” The first two presentations, from Jeanne Law Bohannon at Kennesaw State University and Geoffrey Emerson of the University of Alabama, focused on creative uses of digital tools in the classroom. Jeanne talked about her use of Twitter to engage her students in her literature and rhetoric courses, a great example of empowering students to own their learning. Geoffrey explained his use of journaling and timelines in his literature courses to give students a forum to reflect on and connect with their readings. My part of the session was focused on research. I gave a paper entitled “Pick Up and Read: The Methods, Tools, and Promise of Distant Reading Biblical Commentaries.” You can check out the slides I used here. In this presentation I took a hands-on approach to a distant reading project I’ve been working on. Seeing the genre of Biblical commentary as a structured data set (verse-by-verse comments mean that all commentaries share a basic, consistent structure), I showed how I am using software tools to parse and compile statistics on a given commentary’s treatment of a given passage. So, for example, we can use software to ask questions like “Which verse of 1 Corinthians gets the most attention in commentaries?” or “How has the treatment of 1 Cor 11:24 changed over time?” or “What is the verse in 1 Corinthians that gets the most attention in Calvin’s commentary?” I tried to show the participants the significance of a project like this, but more importantly, I tried to show them how to design a project like this. We got a bit into the weeds with things like XML, Python, and MySQL, but I thought it was important to show people that with a few basic skills we can all learn to “read” in new and interesting ways.
The potential for distant reading is tremendous (more on this another day), but we need people to learn to write software in order to see it adopted on a wide scale.
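To make the idea concrete, here is a minimal sketch of the kind of parsing involved. The XML schema, the sample text, and the function name are all hypothetical, invented for illustration; the real project’s markup differs.

```python
# Sketch: given a commentary marked up verse-by-verse (hypothetical schema),
# measure which verse receives the most "attention," here approximated by
# the word count of the comment on each verse.
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE = """
<commentary work="1 Corinthians">
  <comment verse="1 Cor 11:23">For I received from the Lord ...</comment>
  <comment verse="1 Cor 11:24">This verse has drawn by far the longest discussion ...</comment>
  <comment verse="1 Cor 11:25">Likewise the cup ...</comment>
</commentary>
"""

def attention_by_verse(xml_text):
    """Return a Counter mapping verse reference -> word count of its comment."""
    root = ET.fromstring(xml_text)
    counts = Counter()
    for comment in root.iter("comment"):
        counts[comment.get("verse")] += len((comment.text or "").split())
    return counts

counts = attention_by_verse(SAMPLE)
verse, words = counts.most_common(1)[0]
print(verse, words)
```

Once every commentary in a corpus is reduced to a Counter like this, the comparative questions above become simple queries over those counts.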

The conference reinforced some of my previous thoughts about one particular danger inherent in the current state of the digital humanities, and my paper attempted to mitigate it. By “danger” I mean the divide between the “haves” and the “have nots” in the digital realm of university research. I heard presentations from some fascinating research projects. The most amazing, though, were the products of collaboration between subject matter experts and large centers for digital humanities, well-funded resources at major universities. This is a great situation, as it allows the tech people to do what they do best while coordinating with the researchers, who do what they do best. One particular example of this was the Spenser Archive, an online version of Oxford University Press’ forthcoming collected works of Edmund Spenser, developed in coordination with Washington University in St. Louis. Professor David Lee Miller of the University of South Carolina did a marvelous job of introducing this (and several other) digital projects he’s working on, showing the amazing future of critical editions of important works. However, this project (which again, I stress, is amazing and exciting) was done in coordination with DH centers across multiple universities, with incredibly well-funded teams of researchers. They have programmers and computer scientists working on search algorithms, interfaces, and text mining tools. This can all be a little overwhelming to anyone who does not have access to such resources. How can one wade slowly into the digital humanities? The world seems to be split between those doing full-bore digital projects and those who depend upon existing tools matched with creative pedagogy to alter the way we have always done things. Professor Miller made the excellent point that there has to be leadership from the senior level if we want digital humanities to go mainstream.
He stressed that those who can afford it (and by “afford” he means primarily those who have the security of tenure) must show how real scholarship can be done with digital tools and in online fora. This will pave the way for giving credit to junior scholars who work in the digital humanities. My concern, though, is that the way being paved by these senior scholars may be one that is unattainable to many junior scholars; how can they replicate the level of incredible work Prof. Miller and his team are doing if there is no access to the types of digital humanities centers that he runs?

I would argue that this replication can come from junior scholars who are interested in the digital humanities learning to write software themselves, rather than having to rely on centers with programmers. What I find exciting, and what I was trying to show in my presentation, is that we’re getting to a point where the perceived divide in digital humanities between the “haves” and the “have nots” should start to go away due to the availability and accessibility of programming tools. Software development tools are so accessible and easy to learn, even for those working in the humanities, that individual researchers can do impressive digital work with very basic knowledge. There is still a barrier to entry, though, and I was trying to help some get over that barrier. Basic programming and database skills are no longer “nice to have” on a resume; they are going to be essential parts of a researcher’s toolkit. The challenge is that we don’t know how they’re going to help us until we have a lot of researchers out there experimenting with them. If the world depends upon the resources and know-how of large digital humanities centers, with their staffs of programmers and data miners, then I’m afraid the digital humanities “revolution” is going to continue to be a slow-moving one. However, if subject matter experts themselves learn development skills, then they can more efficiently determine how digital tools may allow us to do things we’ve never imagined with texts.

So, I don’t want to knock those universities that have these amazing centers that allow collaborative projects to move forward (heck, I work at one such university). However, I do want to invite individual researchers to learn programming tools that allow for text manipulation and adaptations like distant reading. We won’t know what’s possible until people start experimenting, and people can’t start experimenting until they learn to write code! So, to sum it all up, learn Python!

Defining Technology

How do you define the ubiquitous term “technology”? At first glance, the task seems easy. We are surrounded by “technology” in our modern lives. Surely the term means something like the sum of the many gadgets and tools that are sitting in plain view (for me, that’s a TV, an iPad, an iPhone, a Macbook Pro, a watch, the lights, etc.). Surely that’s what we mean by technology, right? But if you look around, you’ll start to notice that the term is used with a great deal of vagueness and ambiguity. It is used to refer to the products of our creative process, that creative process itself, or the study of that creative process.

This ambiguity of the term is nothing new in English. I’m re-reading David E. Nye’s excellent book Technology Matters in preparation for a Spring class I’m teaching, and he reminds us that the term has never been as straightforward as we might initially think. The English word “technology” was relatively rare in the early-to-mid 19th century, and it certainly was not used to mean “tools” or “inventions.” The term was primarily used to refer to the systemization of a particular field. As an example, Nye cites books written as “technologies” of glassmaking, detailing the entire art. He notes that even with the Industrial Revolution raging in the United States, “technology” in the 19th century did not refer to particular tools made by humans. He quotes Leo Marx: “The word technology primarily referred to a kind of book; except for a few lexical pioneers, it was not until the turn of the century that sophisticated writers like Thorstein Veblen began to use the word to mean the mechanic arts collectively” (12).

Nye ties the modern understanding of “technology” as tools created by humanity to the German term Technik, originally translated as “technics” in English. In the 19th century, this word meant “the totality of tools, machines, systems, and processes used in the practical arts and engineering” (12). It was over a period of time, really leading into the early 20th century, that “technology” came to be equated with the products of development. For me, the history of ambiguity about the term is a helpful reminder not to settle so quickly on what I understand by it.

I try to push myself to use a definition of technology that draws upon Martin Heidegger’s “Question Concerning Technology” (Die Frage nach der Technik). Heidegger speaks of technology not in terms of the tools that we use, but in terms of the self-understanding of the humans who create and use such tools. That is, the tools we make and use are only the lens through which we can see the actual technology, which for Heidegger is the self-understanding our inventions reflect. Heidegger is not interested in tools as technology, but in “what we are becoming with our technology.” As he pithily states it, “the essence of modern technology is nothing technological.” I find Richard Rojcewicz’s book The Gods and Technology a very helpful guide to what is an amazingly complex and opaque argument from Heidegger. Rojcewicz summarizes Heidegger’s understanding of the essence of technology as “prior to technological things–not only logically, as the condition of possibility, but even temporally or historically.” Heidegger uses the image of the midwife to talk about technology and to make his crucial distinction between ancient and modern technology. Ancient technology is the mindset of humans as co-participants with nature in the creative process. The example often cited is the sculptor who helps bring forth a creation from the stone. As Michelangelo is quoted as saying, the sculpture is in the rock; he just chisels away to reveal it. For Heidegger, the technology of most of human history has been this way: humans create by working with nature. A distinction has to be made with modern technology, though (the distinction between ancient and modern for Heidegger should not be understood chronologically, though modern times make the distinction clearer; both ancient and modern technology exist today). Heidegger is concerned with modern technology as a system where humans are no longer working as midwives alongside nature to create. Rather, humans now see themselves over nature.
Humans are now the exploiters of nature, free to do what they want with what they find. Now, we can debate all day about this distinction or where the line is drawn. But the key point is that the tools that we use are not the technology themselves. Rather, the tools we use point to our self-understanding vis-a-vis nature and other humans.

This, by the way, is why the most recent advances in technology become so interesting and complex (I’m currently writing an essay on this, so only a brief allusion here). In the world of transhumanism, where technology allows us to extend life, upload consciousness, etc., the exploitation involved in invention is no longer of nature extra nos, but of human nature. This has all sorts of implications for how we relate to one another and how we determine who is worthy of exploitation and who is not. The point, though, is that the technology is not just what we create, but what the thing we create says about how we understand our position in the world (the essay, btw, is about how we judge a concept like the imago dei in light of what those like Ray Kurzweil think we can do really soon; stay tuned).

So, as I prepare to help my students (on Tuesday) push beyond our common instrumentalist notions of technology, I’m pressing myself to think about the tools I use as a reflection of who I am and what I want. So, for example, the technology of Twitter is cool, but what does it say about me that I feel some sort of validation that a tweet I send is retweeted or favorited? The ability to share my thoughts on this blog is a nice technological advance (thanks WordPress!), but what does it say about me that I will spend a good bit of time writing all of this, not knowing if anyone will ever read it?

So, students, as we move forward, I’ll be right there with you trying to find a more substantive/existential definition of technology. Thankfully, I’ve got a lot of guides to help me along the way.

In the meantime, I’m turning back to my Codecademy lesson on Ruby on Rails, so I can invent more technology (what does that say about how I understand my role in the world…).

Computational Linguistics, Digital Classics, and A Bunch of Stuff Over My Head…

I’m at the very beginning stages of a research project that’s got me pretty excited and pretty terrified. In speaking with some of my students who are beginning to read Greek, it occurs to me that the transition from reading the Greek of the New Testament to the Greek of the world in which the New Testament was formed can be a daunting one for many seminary students. There are many great New Testament Greek grammars, but the problem with them is just that: they are New Testament Greek grammars. The confidence gained by being able to read the Gospel of John is quickly dashed as soon as one picks up Philo, Plutarch, or Lucian, not to mention “real Greek” like Plato and Aristotle. Many students who have been through a couple of years of Greek want to make this transition, but there is no clear way to do so. Most are forced to do what I did, which is struggle through texts that are way over one’s head, clinging tightly to the Liddell, Scott, Jones lexicon and any (often 19th-century) English translation one can find.

My project is designed to help with this problem. What I would like to help students do is identify which texts are closest to the NT in syntax and grammar, begin with those, and then branch out to more and more difficult texts as they become more competent. Now, this might be an easy project if one were simply to interview people who work with the NT and Greek literature. Heck, I myself have some sense of which ones are the place to start (Apostolic Fathers? Epictetus?). But I am using this idea to expand my abilities and step into the daunting world of computational linguistics. It seems to me that there should be a way to analyze the syntax and grammar of the New Testament, and then create a system by which a given text could be compared to the NT as a standard. For example, if we could determine that the frequency of the use of the optative mood in NT Greek is X, then we could show students texts where the use of the optative is similar. If one could determine enough idiosyncrasies of the Greek in the NT, then one could begin to search for Greek texts that demonstrate a similar set of idiosyncrasies. To do this by hand would be incredibly labor intensive, but computers are designed to do this exact type of repetitive work. If we could teach a computer to identify grammatical structures, then a computer could give us these stats rather quickly and over a large corpus of texts.
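As a sketch of what such a comparison might look like, imagine reducing each text to a small profile of grammatical-feature frequencies and ranking candidate texts by their distance from the NT profile. The features and numbers below are invented for illustration, not real counts.

```python
# Sketch: rank Greek texts by how closely their grammatical "profile" matches
# the NT. Frequencies are per 1,000 words and entirely made up here.
import math

NT_PROFILE = {"optative": 0.5, "genitive_absolute": 1.2, "articular_infinitive": 2.0}

CANDIDATES = {
    "Apostolic Fathers": {"optative": 0.6, "genitive_absolute": 1.0, "articular_infinitive": 2.2},
    "Epictetus":         {"optative": 1.1, "genitive_absolute": 0.9, "articular_infinitive": 1.5},
    "Plato":             {"optative": 6.0, "genitive_absolute": 2.5, "articular_infinitive": 3.0},
}

def distance(profile, baseline):
    """Euclidean distance between two feature-frequency profiles."""
    return math.sqrt(sum((profile[f] - baseline[f]) ** 2 for f in baseline))

# Sort candidates from most to least NT-like: a reading order for students.
ranked = sorted(CANDIDATES, key=lambda name: distance(CANDIDATES[name], NT_PROFILE))
print(ranked)
```

With these invented numbers the ranking puts the Apostolic Fathers closest to the NT and Plato farthest away, which matches the intuition above; the real work, of course, is in computing honest frequencies over a large feature set.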

But how do you do that?

Fortunately, many people much smarter than I have been at this for a long time. There is an entire field called computational linguistics that has as its very purpose the description of natural language by computers. And though working with languages like English is much easier than working with highly-inflected languages like Greek, the super smart people have determined ways of describing language patterns in Greek. There is an entire sub-field called digital classics that works with these tools. You’ve probably encountered a few of their well-known tools: TLG and Perseus are good examples. Just like students learning to identify forms, high-powered computers have learned the “rules” of ancient languages, and therefore they are able to identify forms in texts just like a struggling Greek student. Rather than tag a text (in XML for example) with grammatical forms (as many projects do), a truly computational approach encodes the rules of the language and then allows the computer to apply those rules to whatever text you pass it. For my project, what’s exciting is that these tools exist (see the Morpheus engine from Perseus), they are available as open-source tools, and the corpus of digital texts is growing all the time. So now I’m proposing to take an engine like Morpheus, working with it to identify the idiosyncrasies of NT Greek (much written about, but this will be an interesting project in and of itself), and then start a system by which I can rank Greek texts using the NT as a baseline. How similar is a given text to the syntax and grammar (not to mention vocabulary) of the NT?
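Once an engine like Morpheus has parsed each token, compiling the statistics is mostly counting. Here is a toy sketch; the parse format is invented for illustration, and real Morpheus output looks quite different.

```python
# Sketch: tally verbal moods from morphologically parsed tokens.
# The (token, parse) pairs below imitate what an analyzer might return.
from collections import Counter

parsed = [
    ("λέγει",   {"pos": "verb", "mood": "indicative"}),
    ("γένοιτο", {"pos": "verb", "mood": "optative"}),
    ("λόγος",   {"pos": "noun"}),
    ("ἔχωμεν",  {"pos": "verb", "mood": "subjunctive"}),
]

def mood_frequencies(parsed_tokens, per=1000):
    """Frequency of each verbal mood per `per` tokens of text."""
    total = len(parsed_tokens)
    moods = Counter(p["mood"] for _, p in parsed_tokens if p.get("pos") == "verb")
    return {mood: count * per / total for mood, count in moods.items()}

print(mood_frequencies(parsed))
```

Run over the whole NT and over each candidate corpus, counts like these would become the feature profiles that the comparison system needs.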

But I’m really doing this as an entrée into this world of digital classics. I have a million more ideas about how we can use computers to tell us something about the language of early Christianity. But to do those, I’ve got to get started on this little project. The crazy thing is that there is a ton of support out there to help me. Not only are projects like these well-documented, but there is an enthusiastic and supportive user community there to help. So I’m going to get started with something that is likely well over my head. Yes, I have training in classics and in computer science, but I’ve never really put those together. But as Luther would say, sin boldly. And so I will. This is the absolute beauty of the IT world. There are few barriers to entry. Got an idea? Got some sense of how to do it? Then just try. The worst thing that can happen is that you fail but learn something along the way.

Look back here for a lot more information about this project, as I’ll be journaling it here. The first step is already complete: I downloaded the Morpheus source code and I’m working through it. Also, I encourage you to read this fantastic analysis of digital classics, Rome Wasn’t Digitized in a Day: Building a Cyberinfrastructure for Digital Classics, by Alison Babeu, the current director of the Perseus project. I learned an amazing amount from reading it.

Thoughts? Ideas? Help? Please send all those things along. I’m already sinning quite boldly.

In Praise of Open Source Software, or, what I learned last month

Let’s all take a second to celebrate the open-source software movement. As a long-time user of tools like Linux, I have always been a fan. What’s not to love? We all want software tools that are free and available to all. However, over the past month I have discovered a new appreciation for the open-source movement as a teaching tool. In combination with the web development I’m doing in library school, I’ve learned that the benefits of open-source software far exceed the product itself; the benefit is in the process of taking the product apart.

Over the past month, I have implemented two open-source software solutions at the Pitts Theology Library. Both products have solved real needs in the library, working as well as commercially-available comparable products, and both cost absolutely nothing (other than my development time). They both, therefore, fit what I previously loved about open-source tools (free and available). What I have found most significant in working with these tools, though, is the learning opportunity that the complex implementation of each presented. Both have required significant modification to fit the library’s needs, and so with both I have had the opportunity to develop new skills during implementation. In working with these tools, I’ve learned that perhaps the best part of open-source software is the learning opportunity available with a tool that allows you to take it apart and put it back together.

The first tool is OpenRoom, room booking software developed by Ball State University Libraries. This well-written (though a bit dated) PHP and MySQL application allows users to book rooms for specific periods of time. The tool is falling out of support by Ball State, and I’m considering releasing my own version on GitHub (with the blessing of Ball State, of course). The second is Xibo, open-source digital signage software, also developed with PHP and MySQL. We are using both for important functions at the library. OpenRoom is the application we use to allow students to book small group study rooms. Xibo is driving our digital display, a welcome screen at the front of the library which displays policies, events, and photos.

These tools have not only filled much-needed functions in my new library (and they have); both have also been great teaching tools that let me see how much more “IT fluent” I have become. The beauty of open-source tools is, of course, that they are open source. That is, if you want to do something, you can figure out a way to do it. This was perfect for our needs at the library, as neither fit our requirements “out of the box.” With a little bit of coding knowledge and a lot of intellectual curiosity, I was able to redevelop each tool to do what we needed it to do. So, for example, when I was underwhelmed by the built-in modules in Xibo, I was able to develop my own clock module, using our display sign to keep time in the library. Likewise, I modified the OpenRoom tool to use Emory’s Shibboleth single sign-on application, so users would not have to remember another username and password. In conjunction with some work I’m doing in library school, I’ve been able to develop new skills in jQuery, AJAX, and CSS, while sharpening previous abilities with SQL and server-side scripting. The products I’ve implemented help the library run more efficiently. But the process of implementing them has been quite the learning experience for me, and it will make me work much more efficiently in the future.

We talk a lot in the library world about “IT fluency.” Typically this phrase is used to indicate some level of gadget wizardry. That is, the IT fluent person is the person who knows how to do the most stuff with the most tools. You could say that through these projects and other work I’m doing in library school, I have become more “fluent” in the sense that I’m better at making asynchronous JavaScript calls, or joining two tables in a database, or designing a page with style sheets. Working with open-source software, though, I’d argue I’ve become more “fluent” in the much more important sense of recognizing that you can make software do what you want it to do. No longer am I limited to what the commercially-available products allow me to do. Rather, these projects have given me reassurance that I have the skills (or I can find someone who does) to make software do what I need it to do. This reassurance has become clear to me in another recent implementation, using a commercially-available, closed-source product. I’m not able to change the way that application works, and because I’ve had the experience of modifying open-source software, I realize how frustrating that limitation can be. The real “IT fluency” gained in my work over the last year is not skills, per se, but a recognition that with a little bit of know-how, I can make software do what I need it to do.

So, let me recommend these two tools to anyone working in library-type jobs. Both OpenRoom and Xibo are wonderful products (and again, they’re free). However, let me recommend them also because they allow someone to play, to work with well-written applications, see how they work, and then make the changes that one wants or needs. I get asked quite often how best to learn software development skills. There are great places to do this in a more formal way, like Codecademy or Khan Academy. Personally, though, I’d recommend the “take something apart” model of learning, and open-source software provides a great opportunity. All of us nerds remember how effective this was when we took apart our electronic toys. We learned what a circuit board looked like or how a motor turned by ripping things apart (though they didn’t always go back together). With open-source software, we can do that, but in a far less destructive way. In the last month, I’ve become more “fluent” because I’ve become more emboldened to tear down and build back up (see, there’s some Biblical language).

Library Rankings and Assessments of Value

I have been reading William Deresiewicz’s new bestselling book Excellent Sheep (Free Press, 2014), which I highly recommend, and it has gotten me thinking about the way we assess the value of institutions. Early in the book, Deresiewicz tells the story of the growing influence of the US News & World Report rankings in higher education. Despite strong protests from administrators at the time (the early to mid 1980s), the rankings quickly became the method by which institutions were judged and compared with one another. The situation has almost become laughable: institutions work hard to advance in areas weighted heavily by these rankings, often “cooking the books” in certain categories to receive a higher ranking, which is often accompanied by higher application rates and a higher-quality (in terms set by the US News & World Report rankings) student population, and the cycle perpetuates itself. One need look no further than the recent, embarrassing situation at my own institution to see how out of control this has gotten. These problems of the pressure to adhere to somewhat arbitrary assessment standards are not unique to higher education, a point well made in Cathy Davidson’s book Now You See It (Penguin, 2012). Davidson points out that the SAT and universal assessments in public education have had a similar effect, prompting institutions and individuals to teach and learn for the test. Improving achievement of somewhat-arbitrary standards becomes the primary motivation and guide in curriculum design and instruction, and institutions are often guilty of “cooking the books” to improve their performance in these assessment categories. Once again, the criticism of assessment-motivated cheating hits a little close to home. As a society in general, and particularly as educators and administrators, we have become obsessed with assessment, seeking ways to improve our position in whatever rankings are held as authoritative for our field.

What does any of this have to do with me? (Good question.) I have recently found myself considering assessment and rankings as I give tours of our new library facilities (Pitts Theology Library, whose new location opened last month). In guiding people around our new building, I am often asked where Pitts ranks among the top theology libraries in the US, North America, or the world. And to be honest with you, I never have a very good answer, though I am beginning to become bolder in my claims about Pitts’ relative value. Frequently Pitts has been referred to as the second or third largest theological library in North America. This, it would seem, is a rather straightforward and objective way of comparing libraries. Count up all of the books, and the one with the most books wins. This is a very common way of comparing academic libraries. But I would argue it is not a very effective one. Emphasis on the size of the collection would seem to create pressures toward un-curated collecting, and it would seem to favor those institutions with the highest acquisitions budgets. What is more, the digital age and globalization have ushered in an era of libraries focusing on specialization and collaboration. It makes very little sense for everyone to rush to acquire every book when borrowing a book outside of the scope of a given library is an ILL or consortial agreement request away. Furthermore, the move toward Open Access and digital libraries suggests that a criterion based on a model of individual institutional ownership might not be the best way to judge a library’s value. You could make the argument that Google is the world’s largest library, not because they own the most books, but because they can provide access to the most books.

I would hope that we’re moving away from this judgment of a library’s value based on holdings. Though the size of a collection is certainly important, it cannot be the sole criterion for assessing a given institution or comparing multiple institutions. This past summer, as our collection was moving from our previous location to the new building, I had the odd experience of serving as a librarian in a library that had no books. For a few weeks, our shelves were empty (though patrons had access to materials; it just took a few days!). And yet, even without books, I would argue that we continued to function as one of the top theological libraries in North America. How so? I could make this argument because we continued to support patron research. We did this because of our superior staff and our facilities. The research of those at Candler, at Emory, and across the world who came to us (virtually or in person) continued to thrive. And this, I would say, is the real measuring stick of any academic library. How effectively does the library support its patrons? It may do this by providing books (the traditional measure), but in our case we do this through research consultations, through digital resources, through instructional sessions, through quiet study spaces, etc. So, in the end, the real judgment of the value of a library is an indirect one. That is, my library is only as good as the research that we create and support. This involves providing books, for sure. But it involves far more than this. The books are of little value if they are not properly catalogued, if patrons cannot check them out, if patrons don’t have spaces to read them, if patrons don’t have research librarians to consult about reading them, etc.

So, I am going to continue, as I did today, to advertise that Pitts Theology Library is one of the top theological libraries in North America. I feel confident in making this assertion not because we have one of the largest collections of theological materials (though we do), but because we have the staff and the facilities to support work that draws upon that collection and all the collections around the world we have access to. What about you? How do you assess the value of your library? Deresiewicz and Davidson encourage us to be a bit more creative in our assessments of value (of institutions and students, respectively). I hope, likewise, we can be more thoughtful about our assessments of the value of individual libraries.

Shifting Paradigms of Knowledge, or a Defense of Wikipedia as a Pedagogical Tool

As librarians, it is almost second nature for us to devalue Wikipedia. Frequently we remind students that Wikipedia is not a tool for research. We harp on students not to cite Wikipedia in their academic work. We cite famous examples of Wikipedia’s inaccuracy. We prop up the straw man of an undergraduate turning in a paper full of Wikipedia citations and the sea of red ink from the professor that awaits. All of this is well-intentioned and important, and I agree that Wikipedia should not be used in formal academic work. In my work in an academic library, I often warn students against the danger of relying on Wikipedia, and I stringently refuse to allow them to cite it in academic papers.

However, I’m concerned that we librarians (and many academics in general) are missing the point in our all-out war against the crowdsourced encyclopedia. First of all, it should be recognized that we are likely overstating the case about the inaccuracy of Wikipedia. Countless studies have been conducted on the accuracy of Wikipedia, and for the most part, researchers have concluded that it’s far more accurate than our public decrial would suggest, with some suggesting that it’s just as accurate as other reference works. This is particularly the case for heavily-edited articles, where one sees the value of a lot of eyes on a small amount of content. That Wikipedia is not so inaccurate as we might suggest is even clearer when one considers the false standard to which we hold it. The rhetoric about Wikipedia as inferior to more traditional reference sources suggests that those traditional sources are themselves infallible, given that they are produced by credentialed experts. However, anyone who has worked closely with reference materials in a particular field will admit how uneven and at times inaccurate even these print resources can be.

But it’s not the under-appreciated accuracy of the tool that suggests to me that we may be doing a disservice to students by dissuading them from considering it a tool of the academy. Rather, in leading people away from Wikipedia, we are shielding them from seeing the shifting paradigm of knowledge that a tool like Wikipedia represents. By keeping students from considering just how different the philosophical underpinnings of a crowdsourced encyclopedia are from those of our traditional reference sources, we miss a real opportunity to teach them something about the construction of meaning.

I want to outline two reasons I believe Wikipedia can be an effective pedagogical tool. First, it crowdsources our knowledge base and allows the community to serve as a check on individuals’ arguments. The primary reason so many of us are against Wikipedia (and other tools like it) is that we are operating from a top-down understanding of data and information. That is, we understand information as something that is in the realm of the credentialed expert (there’s a bit of job security behind this idea, don’t you think?). In the traditional mind of the academy, reflected in the decrying of tools like Wikipedia, information is something that the rest of you must only passively accept, because you really don’t know what you’re talking about. Leave it to us, the people writing reference articles based on our long years of study, to report what you need to know. For example, if I want to know who fought in the Spanish-American War, I should turn to a reference work written by someone with the appropriate degrees in 19th-century American history and ask him or her (though traditionally it has been “him”) what happened in that war. But who is to say that this particular historian that I chose knows what he is talking about? Or who is to say that he’s telling me what actually happened and not his idiosyncratic take on things? Where’s the check on his information? Perhaps it comes in reviews from other scholars, but those can be difficult to track down, difficult to follow, and sparse in their coverage. What if there were a place where the description of the war was under the scrutiny of everyone who knew something about that war? Wouldn’t we tend to trust the explanation of the war that had been vetted by millions, rather than one that had been vetted by one (or one and a team of editors), even if most of those millions didn’t have advanced degrees?
Surely we are not there yet with Wikipedia, but you can see how important the democratization-of-knowledge model it is built upon can be. Just as open-source software has the benefit of a powerful user and editorial community, so open-source information like that found on Wikipedia has the power of millions of checks upon it. We are in an incunabular period with this paradigm of knowledge, but it is certainly where we are heading. A roomful of opinions is going to be more accurate than a single opinion, so long as we can bring enough well-intentioned voices into the room to cancel out all the crazies. Wikipedia is not fully there (hence the famous inaccuracies), but I think we’re headed in the right direction. The fact that the more heavily-read articles on Wikipedia tend to be the more accurate ones suggests that this crowdsourced information world is the way to go. Wikipedia is not perfect, but I am concerned that in decrying it we are missing the boat on the paradigm shift that it represents.

A second benefit, closely related, is far more interesting to me, and it is one that I’m only beginning to appreciate. A tool like Wikipedia, because it is crowdsourced, reveals as problematic the traditional hierarchy of data, information, knowledge, and wisdom. In the traditional view (and this is far more complex than I can represent here; check out the Wikipedia article on it!), there is objective data that can be gathered by the experts. Information is a set of inferences made based on that data, or the organization of that data into meaningful conclusions. Knowledge is the synthesis of information in a given context, the subjective interpretation of information in particular situations for particular purposes. Wisdom, then, is the future-looking application of knowledge, an even more subjective rendering of the data, filtered through several subjective levels. In a traditional view, reference sources like encyclopedias are understood to function as data organized into information. That is, reference works present a somewhat objective set of data, organized and described. In traditional encyclopedias, this information was presented as objective fact, verified by the credentialed individual who was asked (and often paid) for his/her expert analysis. Because there was very little check on the information (essentially an editorial check by someone who was not as specialized as the expert writing the article), the reference work presents an article as the objective information with which a reader can do whatever he or she wants. That is, knowledge and wisdom can be created from the information provided, given the context in which the reader lives and works or the particular purpose for which he/she reads. A crowdsourced encyclopedia, though, pulls back the curtain on this process and shows that there is no such distinction between the “objective” information and the subjective construction of knowledge and wisdom based on that information.
Because the information is constructed socially, and because the process of that construction is transparent (we can see the editorial history and read the dialogue surrounding that history), we recognize that there is no “objective” data, regardless of the credentials of an expert providing it. Rather, all data and information are subjective, the product of the perspective and context of the data collector. In fact, if we operate under the DIKW hierarchy, we might ask whether the first few layers of the hierarchy are even possible. If the postmodern turn has taught us anything, it has taught us to be suspicious of truth claims, particularly those made by individuals. The presence of a community, explicitly operating from particular (and different!) contexts, shows this messy process, even when trying to report the most “objective” data. It is revealing, for example, that basic “facts” about the life and presidency of George W. Bush are so vigorously debated. This shows us that even trying to present “data” is a subjective act. So I welcome Wikipedia, because it does the hard work of showing us what many late 20th- and early 21st-century philosophers have been telling us: all is subjectivity.

So, am I going to start suggesting that people use Wikipedia in research? Well, no, but yes. That is, I stand by the informal librarians’ creed that Wikipedia is not a scholarly source. There is a distinction between the article on “St. Paul” on Wikipedia and in the Dictionary of Biblical Interpretation. I recognize that citing the Wikipedia article is going to be met with much red ink. So, don’t do it. However, I do think using Wikipedia as a pedagogical tool is important, and I’m concerned that our refrain of “don’t use Wikipedia” is costing our students a valuable opportunity to catch a clear glimpse of the shifting paradigms of knowledge that are reflected in such a tool. There are tangible benefits of using Wikipedia in the classroom. First, it gives students a forum to present their work and make real contributions. The ideals of Web 2.0 are noble ones; we are all creators and we all have something to say. This is difficult for students to recognize, as they spend much of their time working on artificial assignments that will rarely be seen by a set of eyes other than their professor’s. Using Wikipedia in the classroom can give students an opportunity to contribute, to be part of the world’s construction of knowledge. Second, though, it shows students how meaning is created. It cautions against the far-too-common assumption that a) there is objective data and b) only the experts really know it. Though Wikipedia is not perfect in any way, it moves in the right direction. It is built on the post-modern understanding that all meaning is constructed. It allows us all, therefore, to serve as the arbiters of that meaning. I don’t feel qualified to be a sole arbiter on much beyond my narrow field of training (and even then I’d be a bit cautious). However, I do believe that alongside millions of others, I can come to a pretty fair consensus on what happened, what it meant, and why it’s important. So I encourage teachers to use Wikipedia.
Don’t use it as a singular source of information (though again, it’s not a bad one). Rather, use it as a publishing platform and a teaching platform. Encourage students to edit and create entries, hold an edit-a-thon, make a student assignment the assessment of the history of a Wikipedia article, or do anything else to get students to see how this paradigm is shifting. For goodness’ sake, at least read this prescient article by Roy Rosenzweig (from 2006!) about the benefits of this tool.

The millions in the room will never agree on even the most mundane piece of data, but that’s the beauty of it. We can see the conversation as it happens and see before our eyes the truth of ideas like those of Paul Ricoeur, who reminds us that no event has meaning until it is put into context with other events, and that process is a subjective one dependent upon the whims and desires of the one doing the contextualizing. (Not sure who Ricoeur was? Look him up on Wikipedia! [But check some of the sources in refereed publications!])

Schweitzer, Strauss, and Finding the Courage to be Intellectually Consistent in Faith

Because of an ongoing conversation with a colleague, I’ve been revisiting Albert Schweitzer’s The Quest of the Historical Jesus (a poor English translation of the original German title: Von Reimarus zu Wrede: Eine Geschichte der Leben-Jesu-Forschung [something like “From Reimarus to Wrede: A History of ‘Life of Jesus’ Research”]). As is the case with anything I read from Schweitzer, there is much to talk about here. The man not only has keen insight into the (in his view troubled) search for a historical Jesus, but he has keen insight into the human mindset, particularly the mindset of humans (like him) not content to accept answers provided to them as faith claims. He is always wanting to know “why?” or “says who?” (in addition, he writes beautifully). He also recognizes, though, that this desire to know the truth is often dulled by the voices of tradition or authority.

I didn’t have to read far in the opening chapter to find something that I think is worth spending a lot of time considering (and, not surprisingly, it has very little to do with a quest for the historical Jesus). In outlining what he plans to do in this seminal work, Schweitzer is quick to introduce the ground-breaking work of David Friedrich Strauss, who is, in Schweitzer’s mind, one of the few great heroes of the so-called “quests” (in fact, in the chapter detailing Strauss’ work, Schweitzer will refer to Strauss as a prophetic figure, pre-figuring Schweitzer’s own work on Jesus). What Schweitzer so admires about Strauss, more so than his shocking conclusions about the historicity of the events recorded in the New Testament gospels, is Strauss’ courage to carry through his intellectual program, to fight against the spirit of his time that was urging him to find a Jesus who looked a lot like the Jesus the church had always found. Unlike so many before and after him, Strauss maintains intellectual consistency in reporting what he finds when he looks carefully and critically at the gospels.

Some context of Strauss’ work and career may be necessary to invite appreciation for the depth of Schweitzer’s admiration. Strauss’ work on Jesus, The Life of Jesus, Critically Examined (Das Leben Jesu, kritisch bearbeitet, 1835, ET 1846), was a bombshell, not only for Biblical scholars, but for Christians around the world. Strauss walks slowly and systematically through the gospel accounts of Jesus’ life, showing that at every turn (in his mind), it is more likely that the gospel narrative is the invention of a later church, written as apologetic history to give legs to the burgeoning church. For example, Strauss reads Matthew’s narrative of Jesus’ birth in Bethlehem, flight to Egypt, and return to Nazareth alongside Luke’s account of Jesus’ birth in Bethlehem, dedication in the temple, and then return to Nazareth, concluding that it is not plausible that both gospels are recording events “as they happened.” Strauss shows in great detail how the chronologies simply don’t work, given what we know about geography, travel times, etc. He finds, in most places in the gospels, a clear motivation by a later church to tell stories about Jesus to fit a prophetic hope. The gospels are, for him, fantastic retellings or inventions, rather than “history” in the sense his positivist contemporaries (and most of ours, for that matter) were thinking. Strauss’ book is an impressive work of historical method, but it was met with great resistance and outcry. In many churches today, of course, such rigid historical analysis would be “welcomed” as blasphemy. So much more was the case, though, in Strauss’ time and context. The book made him infamous across the continent, and it cost him his job and in many ways his entire academic career. Seminarians today worry about their professor “taking my Jesus from me.” Well, the reaction to Strauss was far more severe. He lost his job, and he struggled for most of his life to find another.

Now, there’s much more to say about Strauss, but let me return to Schweitzer, my reason for writing. Schweitzer recognizes the brilliance of Strauss’ account, and he praises him throughout the book as a good reader of history. However, his reason for praise so early in the book is Strauss’ intellectual honesty, consistency, and, above all, courage. What makes Schweitzer the most angry in his summary of the many quests is not bad history (though he doesn’t like that, certainly), but timid history. That is, Schweitzer’s famous conclusion that the 19th-century “questers” are guilty of writing themselves into the Jesus they “find” (“There is no historical task which so reveals a man’s true self as the writing of a Life of Jesus”) is really an indictment of his predecessors’ fear that keeps them from writing about the Jesus they really found. Instead, he accuses them of being influenced by the “spirit of the times” and shaping their Jesus to be more warmly received. For this reason, Strauss stands alone (well, he stands above a few others like him, notably Reimarus, who did much the same, but only allowed his name to be put on his work after his death). Strauss did not pull any punches; he wrote about the Jesus he discovered when he read the gospels as carefully and critically as he would any other work of history. For his courage, he garners praise from Schweitzer, even more so because he garnered so much criticism from his contemporaries.

Let me share a quote from Strauss that Schweitzer includes in his introduction. The quote is from an older Strauss, as he looks back at his decision to write that book and his assessment of the notoriety (infamy?) that resulted from his decision to publish his work on Jesus (at age 27!):

“I might well bear a grudge against my book, for it has done me much evil (‘And rightly so!’ the pious will exclaim). It has excluded me from public teaching in which I took pleasure and for which I had perhaps some talent; it has torn me from natural relationships and driven me into unnatural ones; it has made my life a lonely one. And yet when I consider what it would have meant if I had refused to utter the word which lay upon my soul, if I had suppressed the doubts which were at work in my mind, then I bless the book which has doubtless done me grievous harm outwardly, but which preserved the inward health of my mind and heart, and, I doubt not, has done the same for many others also.”

Schweitzer is amazed by the courage this man showed, as am I. Regardless of how one feels about the Jesus that Strauss uncovers in his work (and trust me, there’s much to criticize about his method and his conclusions), I, like Schweitzer, stand in awe at the boldness in his publication. Far too often in the church (and even the academy), readers feel pressure to temper “what I really think” with “what I’m supposed to think,” and I think we all suffer because of it. It is a shame that so many people who are critical and creative thinkers Monday through Saturday feel the need to turn that mode off on Sunday. The result is that we deny “the inward health of [our] mind and heart” for the sake of church or academic tradition. We may feel like the narratives and explanations we’ve been fed wouldn’t stand up to the type of logical or critical analysis we apply to other parts of our life, but we back away from applying such analyses, fearful of how the traditions we’ve been raised on might fare in light of taking the initiative to read and study carefully. A good and thoughtful pastor reiterated this for me this morning when he decried our collective unwillingness to let the open-endedness, contradictions, and confusions in the text lead us where they might. I hope for the type of boldness that Strauss showed, and I hope others will exhibit it as well. Jesus encourages a love for God with heart, soul, strength, and, yes, mind (Lk 10:27). I think we often forget that last part. Let us remember that if the heart is in the right place, the mind engaged in the text is free to wander, to (gasp!) play. We might begin a bit scared about where that will lead, but if we let that fear dominate, leaving us only with a blind, uncritical acceptance of what we’ve been told, I think the consequences are far more dangerous.

So, I take my cue from one many might consider an arch-heretic. Thank you, David Friedrich Strauss, and thank you, Albert Schweitzer (certainly no heretic in the popular perception, though reading him carefully suggests he perhaps should be considered one!), for prompting us to intellectual consistency and boldness. I think Strauss would probably encourage us in the words of his German compatriot Martin Luther: pecca fortiter (sin boldly).