Sunday, November 29, 2009

Pre-installed repositories versus self-built

Not sure how this got past me, but I just realized I failed to discuss the possibility of downloading a pre-installed VM (virtual machine) versus building your own. Well, better late than never...


I believe having both options is actually beneficial: you might plan to first build a repository, configure it, get to know it, and then perhaps have it packaged for new installations. Alternatively, if you have already built a prior version but just need a quick, stand-alone system, you could download a prebuilt one and avoid the needless downtime you would otherwise incur.


A pre-configured system (hopefully built to your needs) is helpful and provides rapid deployment of resources. For example, if you have an event or exhibit rapidly approaching and not much time to create a whole new web presentation, unloading a boxed repository like Omeka will get you up and running within a couple of days. Such a system should have everything you need to promote or enhance the usability and discoverability of your digital objects.


On the other hand, learning how the databases and the various tools within your repository work will ultimately provide greater functionality and a base understanding of what's going on behind the scenes. Why is this important? Long-term maintenance, sustainability, and troubleshooting. Should something go wrong, you'll have a much better understanding of where the problems could lie and how to get your system back up and running. You'll also have a better opportunity to build your own tools and hook your repository into other services. Building from the ground up also gives the developer more opportunities for modular construction.


My current technical skills are rudimentary at best as far as network administration and database development go. I require a lot of instruction and reminders to get a fully functional repository up and running, so I certainly see the advantage of an out-of-the-box solution. In my case, having one to create an initial web presence and make content discoverable is very advantageous, but I would also want to build that same repository from scratch, replace the out-of-the-box version with a customized and more functional system when it is available, and then employ the OAI Object Reuse and Exchange (OAI-ORE) protocol to seamlessly migrate the content to the new system.

Tuesday, November 03, 2009

Federated searching through OAI-PMH

Federated searching, conceptually, is the ability to search across repositories (or web pages) to get access to a large group of resources versus only those resources an individual organization may hold. A useful federated collection is one that returns consistent results quickly. Past federated searches (via page scraping) were almost useless, as inconsistent metadata and overall slowness from the various web sites the crawler might find made results unverifiable. A search on the same keyword only moments later could be entirely different from a previous one. Hierarchical results or scholarly worthiness were almost impossible to return. Metadata was not normalized, and fields were assumed by the crawler based on CSS code and other style tags. The advent of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) redirects federated searching by creating a standardized system for open access collections to be retrieved, organized, and ultimately indexed, creating far more meaningful and faster searches.
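
For anyone curious what a harvester actually sends behind the scenes, the protocol boils down to a handful of HTTP requests with standard verbs; a minimal sketch (the base URL is hypothetical, since every data provider publishes its own):

# Ask a data provider to describe itself
$ curl "http://repository.example.edu/oai?verb=Identify"

# Harvest every record as unqualified Dublin Core
$ curl "http://repository.example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc"

# Limit the harvest to records added or changed since a given date
$ curl "http://repository.example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc&from=2009-10-01"

The responses come back as XML, which is what the service providers below index and sit their search interfaces on top of.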

Having a harvesting service that searches standardized metadata will not only provide results with "Google" speed but also deliver them in a consistent fashion, since the right information is formatted in the correct locations. Another factor to consider is size. Some repositories hold far more data than others, and whether a massive archive of data is better than a smaller, specialized search really depends on your needs, what you plan to do, and how much of a knowledge base your research requires. It also depends on how the harvesting is utilized in a user interface. If the tools are clunky, users will not go there. Results need to be integrated into something that makes sense to the researcher but also brings the data to you. If you have to hunt through several layers of facets and taxonomies to get to your information, you lose valuable time and will probably look elsewhere for other resources.

For our latest course unit we were asked to search three to four repositories from the Open Archives service providers list and the University of Illinois OAI-PMH Service Provider Registry. Essentially all the searches I performed met these criteria, but the few that were unusable (not included in this article) failed because the original data provider either was no longer participating or had created only an initial instance with no items to be found.

First I searched Citebase Search from Southampton University. The Citebase Search service provides users with the facility for searching across multiple archives, with results ranked according to many criteria, such as creation date and citation impact. I searched for the keyword "Medieval Castles" and retrieved one result; however, full text was not available from the server. Through the same collection I searched "H1N1," in reference to the current health scare, and received 69 records in 0.153 seconds. The collections represented in the results were from PubMed Central, Virology Journal, and BioMed Central. All were appropriate scholarly collections.

Next, I searched the Sheet Music Consortium at UCLA. The Sheet Music Consortium is a group of libraries working toward the goal of building an open collection of digitized sheet music using OAI-PMH, which provides an access service, via the standardized metadata, to sheet music records at the host libraries. Represented collections include UC Los Angeles, Indiana University, Johns Hopkins University, and Duke University. I queried the term "ragtime" and found a much better, easy-to-use interface where the actual repositories were clearly labeled within each browse item. I also noticed an interesting tool allowing the user to annotate citations, although it looks like you have to be signed in to use that feature. I went through the List Virtual Collections link as well and noticed it listed which items required a password to view. I found it problematic that the advanced search only lets you search either one collection or "all" of them at the same time. Being able to pick just a couple of repositories could be useful if a researcher were looking for some specific aspect of sheet music across a large group of materials; of course, they could also just search each collection separately if granularity were really an issue.

I then looked at Cornucopia from the Museums, Libraries and Archives Council. Cornucopia is an online database of information about more than 6,000 collections in the UK's museums, galleries, archives, and libraries. Cornucopia is very cross-disciplinary and expresses that in its colorful, graphically based splash page. Once you get into the searching, though, you are within what appears to be a standardized, text-based institutional repository listing. For example, I chose subjects, then coins & metals, and then the Medals collection from the Royal Signals Museum. The citation for the collection resolved, but unfortunately the link to their web page was broken, which brings into question how up to date the aggregation is. It appears that all the results I gathered were about the collections but not necessarily the items themselves, which as a student researcher I would find frustrating. I do not think this service is intended for itemized research but rather to point you to where certain types of materials are held. I didn't find it altogether useful without the direct linking to articles found in the other harvested collections.

Finally I searched the MetaArchive from the Library of Congress. The partner institutions of this project are engaged in a three-year process to develop a cooperative for the preservation of at-risk digital content focusing on the culture and history of the American South. There are about twelve institutions in the providers list, and again I didn't see a way to actually search the archives: there was no real search box, just a series of links starting with Electronic Theses and Dissertations (ETD) and Southern Digital Collection (SDC). I clicked on "Collections," got a list of titles, and then chose "The History of Blacksburg, Virginia," which ultimately led me to an actual article. This particular collection, although full of valuable resources, was very time consuming to navigate and seems to have a lot of work left before it is truly usable. Perhaps this information is being utilized by another search service that makes the data far more useful with a standardized search method. Right now, you have to do a lot of hunting and clicking to find things.

So the real key here is the user interface. It appears we've overcome the hurdle of gathering data; now the issue is making that data come together in a fashion that is meaningful and useful to the user. Researchers need a way to quickly find and compare information and then get that information back out to use or cite as needed.

Tuesday, October 27, 2009

Cataloging when you are not a cataloger

No doubt assigning metadata to content is a challenging process. There is what you would consider common sense; for example, a journal article about feline leukemia would obviously be placed under the subject of cats, but what about more granular areas, and to what degree do you include metadata? Do you also put the article under veterinary medicine, pets, and/or cancer research? Fortunately we have metadata specialists, or catalogers, who are experts in taxonomies to handle this, but I am not one of them, and I have to go off what I assume to be the correct fields.


In the case of my trial EPrints repository I chose to use content I created for the Library Channel, and in doing so I had to work with both the categories and tags we use to organize our materials for navigation on the website. Immediately I threw out our categories because they are specific to campus and function, which does not translate to a generalized repository. I did, however, use our tags (which technically are designed as a web 2.0 form of user-created classification) because they better represent the subjects each program deals with and were provided by myself and a real-life librarian who oversees our productions. Those terms were then plugged into the default keywords field, since that particular field is not a controlled vocabulary and allows for greater flexibility.


Secondly, I took the main concepts of the videos I was ingesting (that they are produced at a university, by a library, and deal with specific topics) and then picked from the selection of Library of Congress (LOC) subject headings to actually indicate the items' subjects. I believe that some future seeker of information looking for content about library instruction, or the topics these videos deal with, would use this information to find the materials. The looser, more granular topics are in the keywords and can be searched there as well.
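
To make the split concrete, here is a rough sketch of how one of these records might look when exposed as Dublin Core, with the controlled LOC heading and the looser tags both landing in dc:subject (the title and terms are invented, and EPrints' actual field mapping may differ):

$ cat > example-record.xml <<'XML'
<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Library Channel episode: Library Instruction Online</dc:title>
  <!-- controlled LOC subject heading -->
  <dc:subject>Library science</dc:subject>
  <!-- looser, more granular keywords -->
  <dc:subject>library instruction</dc:subject>
  <dc:subject>tutorials</dc:subject>
  <dc:type>MovingImage</dc:type>
</oai_dc:dc>
XML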


I can't say I was overly concerned about consistency other than maintaining similar subject fields and using the keywords that originally faceted the materials in their original publication interfaces. With such a limited collection this was not a problem; however, if the collection were to grow I would have to be more concerned that each item was receiving the same level of care and consideration as it was ingested. I would also go back and look at trends. I did, for example, take a look at the items by subject and noticed that, out of six objects, the "Library Sciences" subject heading showed a lower count than it should have, and I was able to correct the overlooked item accordingly.

Tuesday, October 13, 2009

Stats and Digital Collections

This week I'm just thinking out loud about what you do about stats and digital collections, particularly if your collection is dispersed throughout the web and not necessarily housed locally.

The Association of Research Libraries (ARL) stats have come up once again this year. They are a big deal if you want to maintain your standing for bragging rights and accreditation, but as usual they are a challenge. Historically, ARL stats were about how many books and journals your library held. Now, entering the digital realm, we are asked how many files we have. The challenge begins, in one respect, because we never know what they are going to ask for, since the questions can change from year to year. We then have to start throwing these numbers together in a near panic. But how meaningful are they? How do we even determine what counts as a file, and how many there really are?

Currently we disseminate a lot of material via web services such as the Internet Archive, YouTube, Vimeo, iTunes U, and of course provide syndication through our blog and Feedburner (for podcast optimization). We also use Google Analytics to track statistical information about viewership and hits on our web pages and catalog. However, that only works for things we have control over and that we have already hooked into Google Analytics and let run over time.

Each of these services provides its own specific form of statistical feedback. Google owns YouTube and Feedburner, and both provide dynamic graphic maps of users and views. Feedburner shows you subscriber info, such as what tools people use to subscribe (Google Feedfetcher, iTunes, Sage, etc.), and gives you a rough sketch of your traffic from the past day to the past year. Vimeo gives you a basic view count and, like YouTube, will even tell you who is subscribed to your materials through their interface.

The Internet Archive also provides a download count on your item page, but that doesn't always reflect the number you see in the browse interface. I've noticed that the counts for files that filter from the Internet Archive through Feedburner's RSS engine don't always match either. So if you use an outside party to host and help serve your materials, how accurate are their statistics? Are they padding or hiding results? Is the statistical analysis outdated?

On the simple file count issue, how do you handle derivatives? On a basic local level you might have an archival TIFF or JP2 file with a corresponding set of transport versions. Do you count them all or just the ones made public? If the files are essentially the same content but modified for playback, are they counted together or separately? Our files loaded externally to the Internet Archive also have format derivatives created by the IA itself. Do we count each of those, and what about the separate files we additionally load to iTunes U?
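
Locally, at least, the master-versus-derivative split can be made explicit with a couple of quick counts, which is roughly where I would start (the paths and file extensions here are only illustrative):

# Count the archival masters
$ find /data/masters -name '*.tif' -o -name '*.jp2' | wc -l

# Count the access derivatives separately
$ find /data/access -name '*.mp4' -o -name '*.mp3' | wc -l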

We were recently asked about a particular digital collection we put online several years ago as part of a collaborative grant with GWLA, which would have been no problem except that after we put in our share of content (housed on a local server) we did nothing to promote it and certainly were not tracking the searches within our collection. There was no way for us to tell what was going on with it. Our catalog had a link to the material but only pointed to the consortium's main web page. (Note: the consortium uses OAI-PMH, the Open Archives Initiative Protocol for Metadata Harvesting, to access our materials in their search engine.) We can see if anyone put in a search for the materials via our Google Analytics tracking of searches in the catalog, but as far as that goes it shows a big zero. So as far as we can tell the collection is a vastly unused assortment of documents.

On a related note, we also are unable to retrieve statistics for the files we house in iTunes U. The iTunes U servers are owned and operated by Apple, but the client brands our "channel" with a specific look and feel developed by ASU programmers. Unfortunately they have not had dedicated resources to hook into the APIs that would give us stats.

Statistical analysis of digital collections is far, far more complex than counting bound materials on a shelf and the number of times they are checked out. We have to take into consideration servers, hits, views, downloads, searches, and file derivatives just to skim the surface. What does it all really mean, and what information really gives you the best feedback about your collections and their usefulness? The lesson learned here is that when you are designing and developing your online collections, at some point you are going to be asked for stats, and you had better have a system in place to provide real data. Of course, anticipating exactly what those questions will be is key.

Tuesday, October 06, 2009

DSpace on my VM


I actually installed DSpace this week and created a couple of collections. Unfortunately they are video and audio files and don't play in the interface. I will probably keep this instance and play with it over time to see if I can get them to display properly. Currently I'm assigning test users and working with the metadata as well, but here's the first look:

Friday, September 25, 2009

Working with Drupal

We've spent the last two weeks installing and configuring a basic Drupal site. This week was focused particularly on installing a dizzying array of modules to make the site more workable as an archive of digital objects. In addition to the requested modules (faceted search, ...), I installed a module called Lightbox2. Initially it didn't do anything, but I continued reading through the how-to pages and found the settings info on the page "Using Lightbox2 with TinyMCE or FCKEditor," which made everything come together. Now, the truly cool thing about Lightbox2 is that when you click on a thumbnail the module places the image above your current page, not within it. You can then click to see more detail, go to the next image if they are grouped, or close it. The floating image also displays descriptive information.

I have not tried them yet, but the module also works with video and slideshows. All things considered, this was a very easy module to set up. Based on our prior instructions I also went in and granted authenticated users permission to use the module, and I already knew the commands and places to go to make the whole thing come together.
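
For my own notes, dropping a contributed module like this into a Drupal 6 site boils down to a few steps; a rough sketch (the path and version number are illustrative, not necessarily what I used):

# Unpack the module into the site's contributed-modules directory
$ cd /var/www/drupal/sites/all/modules
$ wget http://ftp.drupal.org/files/projects/lightbox2-6.x-1.9.tar.gz
$ tar -xzf lightbox2-6.x-1.9.tar.gz

# Then enable it under Administer > Site building > Modules
# and grant access under Administer > User management > Permissions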


Below is how one of the images in my collection looks after installing and enabling the module:



Personally I think it's pretty slick.

So is Drupal suitable for displaying my collection? Yes and no. Yes, because I was able, within minutes, to find a new module, install it, and set it up to improve the look and feel of my image presentations. There is a whole suite of modules to work with to improve navigation and the overall presentation of my content.

However, I can also say "no" because Drupal is not a turnkey, out-of-the-box solution. It takes work and lots of experience to grasp how all the components work together and to find just the right tools to make a usable collection worthwhile.

Furthermore, I have just scratched the surface of working with Drupal. I have only had a glance at how it works and am not really cognizant of the problems that may arise. I feel that Drupal in itself probably isn't the whole solution. It can certainly help with the presentation, but the management and ingestion of material will probably require something else entirely. I think building web development skills with a platform like Drupal is very important, useful, and workable, but gaining knowledge of metadata tools and other data management software is also required, if only to know the strengths and weaknesses of the system you are working with.

Tuesday, September 15, 2009

Tech assignment so far

Drupal, Drupal, Drupal...

To date my proudest accomplishment has been installing and initially configuring a Drupal site. It truly is not much to look at, or use, but it works, and that is a huge step. I have worked with Drupal in the past but only in the area of content management, not design. Being able to configure the navigation, look and feel, and ontology of the site will be a huge, huge step toward being a better project manager and web developer.

In the process, I have installed two separate virtual servers, reinforcing what I learned in the last class and hopefully making systems administration less intimidating. Having a peek into the world of systems administration means having a greater ability to communicate and work with the "real" systems administrators out in the field.

The assignments seem to be going at a nice, moderate pace. Being sick this past week slowed me down a little but I have still been able to keep up with class and not fall behind, which is always my worst fear.

Although I am only enrolled in one course this semester, having two instructors means I have to stay on my toes about what I need to do and have done. It's not quite as linear as the prior class (IRLS 672) in the sense that we are running two parallel paths that occasionally (and ultimately) merge. The last course was a bit more intense but followed a very straightforward progression. Also having to do more analysis about management issues while dealing with the tech issues means my attention is progressively divided. So I welcome the slight change in pace.

Tuesday, September 08, 2009

Beyond HTML... back for IRLS 675

Our first blog assignment for IRLS 675 was reviewing the 2006 special content management issue of Library Hi Tech. The article titles alone indicate this is a great resource for libraries struggling with migrating their web resources from static HTML pages to a robust CMS. I chose to cover the article "Beyond HTML: Developing and re-imagining library web guides in a content management system" by Doug Goans and Guy Leach. Although the article is three years old, its lessons can be carried forward to current technologies as they pertain to organizing and understanding a move to a CMS from a static HTML (FrontPage or Dreamweaver) way of doing things.

The article talks about how a Georgia State University Library web librarian, a web developer, and a team of liaison librarians implemented a new system of subject guides, used user testing and surveys, and how the result met and exceeded their expectations. A key finding from the report illustrated how libraries need to give as much attention to user interface issues as to technical (behind-the-scenes) issues. Organizational buy-in is critical, and from my experience I find this true.

The GSU library started from a situation in which liaisons had full access (with a range of abilities), which created inconsistent and less credible resources. Content, time, and enthusiasm differed between the pages within the website. Thousands of pages were amassing, and in one instance an entire directory was accidentally deleted.

The team realized they needed a database-driven system that separated the content from the layout and navigation. This would also allow content to be more readily reusable and object oriented. Limiting the liaisons' access to the visuals actually freed them up to concentrate on their content. The system would need to be flexible and meet the needs of users while still maintaining adequate security.

The GSU team investigated commercial solutions but found them too costly. They also looked into open source options but felt at the time (2003-2006) that those options were not robust enough for the needs of a university library, so they instead chose an in-house solution. The ASU Libraries went through a very similar process but ultimately chose the open source Drupal option, since it was robust enough to handle our needs by the time we started investigating the same issues.

The key to their success was their use of user testing, surveys, training manuals, and workshops designed to educate and get buy-in from the rest of the staff. Usability seems to get overlooked on many projects, and in the case at ASU it was not taken into consideration as much as it should have been during an earlier assessment of the website, prior to the Drupal migration. Our second venture into updating the library website was far more successful because it involved training, user testing, and a dynamic web environment, similar to the GSU example, that gave control of the content back to the librarians while keeping the navigation uniform and in the hands of the designated web team.

Tuesday, August 11, 2009

Project Management... not as dull as you may think

I'll be honest, some of this week's reading was not the most exciting, curl-up-by-the-fire stuff I've ever delved into. But it was probably some of the most important reading I've had to date. (Sometimes I think I say that each week.) Out of the stiffer material, I had to laugh at H. Frank Cervone's article on how not to run a digital library project, especially his observation that the thought of project management causes librarians' eyes (and just about everyone else's) to glaze over. His humor and down-to-earth language get the point across without being overly academic.

I get that same simultaneous understanding and panicked look from people now when I start preaching the virtues of project management and risk management. Arguing the case to make sure we get the equipment we actually need, are able to use, and have the capacity to maintain, instead of just spending money because "if we don't, it goes away," is not the most pleasant of experiences, but I'm now convinced you have to take that perspective for any project to be effective and efficient. Keeping a goal and an end date in mind is great advice that gets left out a lot. The comforting thought here is that we are obviously not alone. Scope creep, incorrect budget projections, lack of buy-in, and wild guesses seem to be the norm. I've seen this not only within my department but at the university level as well.

I can't recommend Cervone's writings enough. Do a check in the Wilson or EBSCOhost databases for "library project management" and his articles dominate your search results. Many of them are included in our IRLS 672 readings this week. That's because he knows what he is talking about.

Large technology projects at universities have very high stakes, and a good project manager who has successfully coordinated the complex iterations of a project can save thousands if not millions of dollars, thousands of hours of work, and the sanity of the people working on it. Also, knowing when to pull the plug on escalation, or when a project simply won't work as intended, is critical to preventing ultimate failure and undue waste. Mark Keil's article "Pulling the Plug: Software Project Management and the Problem of Project Escalation" graphically lays out the factors that contribute to scope creep and the overgrowth of projects to the point that they suffocate themselves. How many projects are implemented, even though no real plan or analysis was thought out, only to end in failure after half a million dollars and years of work have been needlessly thrown away?

I'll have to go over the project management articles again to get more out of them. It was a lot to take in within a week, but the overall concepts of organization, preplanning, dynamic groups, and being user centric and honest about your successes and failures have already sunk in.

Sunday, August 02, 2009

Reviewing IRLS672

When I started this course I felt pretty knowledgeable about the ins and outs of digital repositories. I realized I didn't know much about the back end of things but really didn't think too much about it, since our developers generally handle that stuff.

I think quite differently now. The back end "is" the repository and how you set up your system's architecture is crucial to the maintenance, delivery, and organization of the data you are trying to present. The most helpful aspect has been looking at what you are building and determining, in advance, what you plan to do with it and how you will get there.

Pre-planning with functional requirements, data modeling, and just having concrete goals are paramount and I have already been integrating those aspects in what I am doing right now.

I also have a much greater understanding (not complete, of course) of and respect for the programmers, developers, and system administrators I work with every day. A digital repository isn't just about a clean interface and fancy widgets but about robust relations between objects and efficient ways of getting to them.

I can have conversations and attend meetings where topics like our database infrastructure (Fedora, MySQL, PHP, etc.) don't make my eyes glaze over.

I also realize I have much more to learn and must always be educating myself on the technologies and examining the functional goals of collections and interfaces we propose. This is something that will be a constant in any digital enterprise I participate in.

Tuesday, July 28, 2009

The Home Stretch

Well, here we are. Only two sections left after this. I'm going to take the unit quiz tonight (at the last minute) to see how I've done. This week was a lot of MySQL query "stuff". I'm not sure the concepts themselves are any more difficult than anything else we've learned, and I would say that last week's data modeling section was far trickier, but those lessons were crucial to understanding how the SQL language works in a practical sense and why we build tables the way we do.

We were introduced to a new way of interacting with MySQL via phpMyAdmin, and I found the tool quite useful and intuitive. It even shows you the code it generates while you fill in the tables, verifying that you are inputting the correct info. A co-worker recommended MySQL Administrator, but I think for now I'll stick with Webmin, phpMyAdmin, or the command-line MySQL Monitor.

The most difficult time I had this week was working with joins and getting the syntax correct. I was particularly challenged when trying to tie three tables together, because if the phrases were not constructed properly (as with any query) I would get error messages saying things like "collection_id is ambiguous in clause...." Left and right joins were also tough and didn't seem to produce correct results when I tried to tie more than two tables together. But I gave it a shot. I haven't looked at what is in store for next week, but I hope it builds on this week's and last week's lessons, reinforcing what we are learning.
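
As far as I can tell, that complaint goes away once the shared column is qualified with a table alias or full table name; here is a three-table sketch with made-up table and column names:

$ mysql -u student -p repository <<'SQL'
SELECT i.title, c.collection_name, f.file_name
FROM collections AS c
JOIN items AS i ON i.collection_id = c.collection_id
JOIN files AS f ON f.item_id = i.item_id
WHERE c.collection_id = 3;   -- "c." says which table's collection_id we mean
SQL

Writing c.collection_id instead of the bare column name is what keeps MySQL from having to guess.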

Tuesday, July 21, 2009

Data Modeling

We're in our 9th unit now, and to get a better grasp of what it takes to put together a solid repository we've been going over data models and entity-relationship models. On the surface it makes a lot of sense, but in practice it is quite challenging. When you have to stop and think about how attributes relate to a particular object or entity and how best to break that down for database management, you can quickly get frustrated.

The areas I'm really going to need to work at understanding are normalization and entity-relationship diagrams. I just need to continue reading and seeing examples of how these are put together. I also need to find some more tutorials on how to decipher crow's foot notation. Unfortunately the Wikipedia entry is rather vague, with a diagram that doesn't go into much detail. I'm still not altogether sure what the notations mean and how to use them. I wonder if having a solid background in calculus, physics, or logic would help.

I think another problem with most of the tutorials is that they each tend to stick with one example. Seeing how different types of repository data can be modeled would be helpful. For example, let's have some basic examples of manuscript, photograph, and other document collections modeled, and see what others have already done to model, relate, and normalize their datastreams. I'm sure with a little searching I'll be able to uncover some other examples, but it seems that institutions either keep this information close to the chest or haven't actually done this yet.
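
In the meantime, here is my own rough sketch of what a small, normalized photograph collection might look like as tables (all of the names are invented):

$ mysql -u student -p repository <<'SQL'
CREATE TABLE collections (
  collection_id   INT AUTO_INCREMENT PRIMARY KEY,
  collection_name VARCHAR(255) NOT NULL
);

CREATE TABLE photographs (
  photo_id      INT AUTO_INCREMENT PRIMARY KEY,
  collection_id INT NOT NULL,  -- each photograph belongs to exactly one collection
  title         VARCHAR(255),
  date_taken    DATE,
  FOREIGN KEY (collection_id) REFERENCES collections (collection_id)
);
SQL

In crow's foot terms that foreign key is the classic one-to-many relationship: one collection, many photographs.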

The good news is that as I work through these concepts (with a little more time than these past few days) I'll have a much better understanding of what the programmers are having to tackle behind the scenes. If I've learned anything out of this, it's that a working digital repository is far more complicated than just placing objects on a server and filling in metadata fields. You have to understand how items relate to each other and clear up ambiguity, redundancy, and non-essential information to improve the speed and storage capacity of your database.

Tuesday, July 14, 2009

Technology Plans

This week's class section was entirely about technology planning in libraries and involved probably the most reading we've had yet. Technology planning has historically been overlooked and mishandled in many organizations.

Coincidentally, we've been dealing with this at work for the past several weeks as we've stepped back to reevaluate the purpose and function of our digital repository. We're in the midst of creating documentation about and for our digitization efforts as we try to come to terms with the scope, development, and commitment of our projects. We'll be continuing this process for some time, and since we are creating "living documents" we'll have to periodically revisit our plans to evaluate their effectiveness and adapt to future situations. I will probably be involved in technology planning throughout my career, considering the complexity of digital library/repository software and hardware and the specialized staff who provide front-line services and the management and maintenance of the systems.

This unit could not have come at a better time. I was able to bring our readings in and share them with the rest of our functional requirements group to help guide and also back up many of our decisions. I think I've also indirectly been promoting the SIRLS DigIn program by showing all the value I'm getting out of the course in only the first section.

Some key points I've taken away from the readings this week include:

A plan must be a living document meeting the mission of an organization, and you must not only be aware of technophobes but also be vigilant against technolust to avoid scope creep. (Stephens, Michael. Technoplans vs. Technolust. Library Journal, November 1, 2004.)

A technology plan must be flexible. It is an ever-changing, political document explaining in simple language to investors (internal and external) that you know what you are doing. It should be generic enough to get the point across without committing to too many specifics that you honestly can't anticipate and most wouldn't understand. The Arizona State Library's 2007-2012 technology plan (pdf) does a wonderful job of laying out lots of goals and needs without getting bogged down in detail that would undoubtedly change drastically over the five-year period it covers. (Michael Schuyler. Life is what happens to you when you're making other plans. Computers in Libraries, April 2000, pp. 54-55.)

Staff, training, support, and maintenance have to be taken into consideration when making plans and also when applying for grants. Most libraries cannot afford to offer sustainable services through self-funding and will need to coordinate with other organizations, particularly state libraries, to leverage funds. Libraries must make sure government policy makers understand the importance of funding and the impact of programs like the Library Services and Technology Act (LSTA) and E-rate assignments. (Bertot, Carlo, et al. Study Shows New Funding Sources Crucial to Technology Services. American Libraries, March 2002, v. 33, n. 2, pp. 57-59.)

I compared the California State Library (pdf) and Arizona State Library plans, since I have lived in both states and have a vested interest in them. I was impressed with their focus and noticed trends specific to each state's function and politics. At first I was a little perplexed that they weren't more "specific," until the readings emphasized the flexibility and generalized nature these plans need in order to be successful.

Although not necessarily a library tech plan I found the Scottsdale Unified School District tech plan (pdf) to follow along with the best practices of the readings. It emphasized needs, mission, budget, and provided several goals with concrete implementation strategies. It also laid out objectives on applying for E-rate annual funding.

Ultimately I've taken from this experience that operating without a plan is like putting the cart before the horse. You won't get far, and you will lose out on partnerships and funding critical to the survival of your digital initiatives. Most technology projects fail not because they lack expertise or initiative but because they lack solid planning, buy-in from stakeholders, organization, and reasonable scope.

Tuesday, July 07, 2009

Learning XML Unit 7 Blog assignment!

Learning XML
About four years ago when we started podcasting in the library I had to learn about XML (Extensible Markup Language) to create a functional RSS feed. We were also asked to begin investigating the pros/cons of RSS and RDF as we first started research work in developing our digital library. So XML quickly became a major part of my working life whether I liked it or not.

I initially learned XML by just looking at what other websites, podcasters, and repositories had done and how they were organized. I also went through various web tutorials, W3.org info pages, and even the Apple iTunes help pages on what elements were necessary to make our feeds work. This process was very unconventional; for the most part it worked for my needs, but it was spotty and, to be honest, confusing.

Unit 7 of our class has been largely focused on the purpose and use of XML. We were instructed to view a series of instructional videos by Mark Long. Those videos, along with our readings and the course lecture and assignment notes, have been invaluable because I not only learned basic XML structure, which I was already familiar with, but also the basic rules that apply to that structure. I learned that "well-formed" XML may not necessarily validate against the rules of the DTD it is associated with. I don't find XML to be quite that confusing anymore, but then I haven't even tackled XSLT (Extensible Stylesheet Language Transformations) yet.
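
One quick way to see that distinction in practice, using the xmllint tool from libxml2 (not something the class required, and the file name is made up):

# Check only that feed.xml is well formed: tags properly nested and closed
$ xmllint --noout feed.xml

# Additionally validate it against the DTD the document declares
$ xmllint --noout --valid feed.xml

The first command can pass while the second fails, which is exactly the well-formed-but-not-valid case.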

The most helpful modules in the video tutorials were the ones on structure (5 golden rules!), special characters (watch out for those greater-than symbols), and, most importantly, attributes. I'm still not 100% sure when to use or not use attributes in XML (I had actually never considered them myself), but it seems as though most everyone else is still asking that same question.

Practice system update!
So far, so good! I followed the instructions in our Unit 7 assignment and was able to connect to my new "server" remotely and, even though it's a laptop, run it in headless mode, which for non-techies means I didn't use a keyboard or monitor actually attached to the system. In fact, to emulate the sense that the system was indeed headless, I closed it up and stuck it under my desk. Worked like a charm. I have been simultaneously running my VM system as well for added practice and was able to ping both systems, so I essentially have two separate servers, albeit closed systems, running in my house. Each system also now has a personal web space for a user, which helped me understand the public folders behind both the ASU and UofA personal web space accounts.
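
For anyone wondering, "connecting remotely" boiled down to commands along these lines from my desktop machine (the hostname, user name, and addresses are made up):

# See whether the headless laptop answers on the network
$ ping practice-server

# Log in over the network instead of a locally attached keyboard and monitor
$ ssh student@192.168.1.50

# Check that the VM "server" answers as well
$ ping 192.168.1.51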

Monday, June 29, 2009

Learning HTML

This week we were asked to talk about how we went about learning HTML and the resources we used.

I probably began learning HTML around 2001 or 2002, which compared to many people I know was late in the game, but my concentration prior to that only required that I view web pages, not write them.

My first resource was my coworker and onetime supervisor Kirk. He gave me a lot of initial pointers and basically said to just look at other web pages, see what they were doing, and try to avoid using "flashing" text. I also used the W3Schools site to get the basics and details down, and I still use it today. In fact, I had gotten so used to Dreamweaver and bouncing back and forth between the code and the WYSIWYG editor that I had to double-check myself when hand coding basic things.

Some of the more intermediate or advanced areas I looked at were table cell attributes like padding and alignment. I also looked into CSS, which I use regularly in my work-related dealings but not from scratch, so at this point I have not integrated a stylesheet into my class webpage. Since I'm coding just one or two pages (at this point) I'm not saving much time, but if my page were to grow or become more complex I would need to invest the time in a separate or inline style sheet to better handle the look of the page.
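
The move itself is small; a rough sketch of what pulling those table attributes out into a separate stylesheet might look like (the file names and rules are just placeholders):

$ cat > style.css <<'CSS'
/* replaces per-table cellpadding and align attributes */
td { padding: 6px; text-align: left; }
CSS

$ cat > index.html <<'HTML'
<html>
  <head>
    <title>Class page</title>
    <!-- one external stylesheet shared by every page that links to it -->
    <link rel="stylesheet" type="text/css" href="style.css">
  </head>
  <body>
    <table><tr><td>cell content</td></tr></table>
  </body>
</html>
HTML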

I really wanted to just hand-code my class page without the aid of any GUI interfaces. For example, Word (and products like it) is one of the worst ways to build a page, as it introduces a lot of span tags and back-end code you are not aware of and don't need. Some of those interfaces also have a tendency to leave empty tags hanging around when you cut or delete content.

As an additional milestone, I finally installed my practice system and everything went well. I was able to ping by host name and assign static IPs to both my practice and VM servers. I was even able to see the webpages via my iPod, which is connected to the network wirelessly. Now if I can continue to remember all the steps involved I'll be very impressed with myself. The only hangup I had was when I was pinging my practice system and got "host down." I had forgotten to plug in the Ethernet cord.

Monday, June 22, 2009

Reflecting on Learning - course presentation materials for our Networking Unit.

This week in my Applied Technology course we had a variety of learning materials to work with including the standard lecture document, a chapter out of the Linux Administration Handbook, some videos and other links to technical materials.

First we had the course lecture. I had to read this one at least twice to grasp all the details and make sure I understood where Professor Fulton was coming from. That paid off, because as I went through the related/suggested links and readings everything came together and was reinforced.

Taking a look at chapter 12 (2nd edition) of the Linux Administration Handbook, on networking, much of the professor's lecture was repeated, but from a slightly different narrative and, as usual, full of details. Although the book says it's not necessarily designed for beginners, I disagree; it's a must-have. The historical references and figures were a great complement to the lecture materials. The footnotes also turned out to be just as important as the paragraph material, particularly in the section dealing with the "Request for Comments" documentation.

The Wikipedia links were invaluable for their brief synopses, and their structure allows you to skim through if you choose and hit the highlights. Also, being able to drill down through subsequent term links helps build a better understanding of the subjects we're investigating.

We were also asked to watch a few videos about networking. TCP/IP - An Animated Discussion was a basic analogy of how the protocol works. Even though it looked like it was created in Microsoft Paint, I found it effective and even entertaining for a subject that had me doze a bit in the Nemeth book. Heavy in humor and analogy, the video can be watched several times and goes into the alphabet soup of Internet protocols and systems. It reminded me of some old Annenberg educational videos I used to like watching in high school, so I found it very easy to watch and retain.

A far more surreal and even more effective video was Warriors of the Net. Somehow the creators of the video were actually able to personify things like "The Ping of Death" and "The Router Switch." This is a highly produced, animated video that takes a look at what happens when you click on a link and request a service (in this case the very video itself) from a web page. Perhaps seeing Tron as a kid helped me connect with the presentation. Either way, it really drove home the efficiency of information transmission and how "errors" can happen.

I consider myself a visual, hands-on (active) learner, so the activities we were given are always the most effective. The videos reinforce the readings, and the readings reinforce the lectures and provide the details and history we need to move forward. But the complexity of what we are working with in many ways brings out my more "reflective" half as well. At times this is frustrating because I do feel like I'm getting behind (when, three days after I've started a section, I'm still reading), but I also realize that if I don't do the preliminary work the rest is meaningless. Perhaps it's just getting me back to the way I learned years ago, with flash cards and memorization over simple application and experimentation.

This multi-tiered way of learning is interactive, interesting, and well rounded. I also find the quizzes very helpful, as they provide a sort of "self-diagnostics" on what we are learning and how we are interpreting the information. There were a couple of questions I had to go back over and rethink before realizing I was oversimplifying some of the concepts. I hope the rest of the course is structured like this.

Monday, June 15, 2009

Unit 4 Class Assignment: Adding Users and Groups

For the first time I had almost no problems running through our course exercises. Unit 4 deals largely with permissions and adding users and groups. Our main focus was learning how to add a user and group via the command line and then comparing that with a couple of GUI management tools.

To aid myself in working in the shell, and for future reference, I began writing the basic commands and arguments on Post-it notes and putting them around my desk for easy reference. I used a simple name I could remember for the second user and, using the various "sudo" commands as directed, was able to add the new user, who was automatically added to a group, and then verify that all went well.
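
The command-line portion, roughly as I remember it (the user and group names here are placeholders, not the ones from the assignment):

# Create the user; on Ubuntu, adduser also creates a matching group and a home directory
$ sudo adduser testuser

# Create a separate group and add the new user to it
$ sudo addgroup testgroup
$ sudo adduser testuser testgroup

# Verify the account and its group memberships
$ id testuser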

Next we used the GNOME desktop utility, and being a GUI it not only went well and was very intuitive but also allowed me to add additional information like a contact phone number and address. That was a feature I didn't notice right away with the other adduser utilities. Also, the GNOME tool is built into the Ubuntu desktop; opening it was as simple as going to the drop-down menus and selecting the application.

Finally we used Webmin, and of the three it was probably the easiest to use for actually adding users. The installation was a little more complex, as we had to run some aptitude commands to install and configure it. Again I had no problems, but had I made one typo or overlooked something I probably would have had different results. Webmin has a very friendly user interface but lacks the advantage of being fully integrated into the desktop (like GNOME) or simply being part of the system like the command line.

Altogether I really liked all three ways to add users, and if I had to choose I would probably use Webmin. However, one thing I wonder about is what issues you would run into if you didn't stick with just one utility for adding users and groups. Would one utility wipe out the functionality of a user that was created with a different utility if their formats don't make sense to each other?

Tuesday, June 09, 2009

Unit 3 Class Assignment: Text editors a hands-on experience

This week our class got a real hands-on trial working in the Linux terminal. To be honest I felt a bit lost at first. I've certainly had to unlearn what I have learned, step back, and remember the good old days prior to having the graphical user interface crutch. I've also had to get used to not using a mouse to do almost everything. This was a scary and wonderful first step (at least for me) into a world of much more control and responsibility.

The readings were not as numerous as before, but they were still thick with detail and loaded with info. I wish we actually had an additional week just to go back and re-read everything from the past three weeks. Some of the information kind of blends together, and when you're dealing with commands you've got to keep them straight. Fortunately our instructions and tutorials are very clear, and once I familiarized myself with them I was ready to go.

There were a number of commands we were asked to run this time around, including downloading and installing software ($ sudo aptitude install vim-runtime), which worked great, and running programs (vimtutor), which didn't work so well, although later, after some guidance, I was able to get a similar one to work (vim tutor). We even did some configuring via nano, where we made .bashrc alias the "ls" command so it would always show hidden files. That worked great.
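
For the record, the .bashrc change amounts to a one-line alias, roughly like this (the exact flag we were told to use may have differed):

# Edit the shell configuration file
$ nano ~/.bashrc

# the line added inside .bashrc so a plain "ls" also lists hidden (dot) files
alias ls='ls -a'

# re-read the file so the alias takes effect in the current shell
$ source ~/.bashrc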

Additionally I ran some connectivity tests by using a simple ping command and got the IP address of our VM server (ifconfig).

I found the Vim text editor to be fairly easy to work with and quick to understand. Although I had some technical issues at first, I was able to follow along and complete all of the tasks. The main hurdle was just realizing I was not in the actual command line anymore. The Vim commands make a lot of sense, and I agreed with the tutorial's suggestion of using the h-l keys to navigate the screen. Once you get that down you can actually zip around quite quickly.

We were also asked to think about how we have done configuration in the past on our own computers compared to configuring Linux files this week. I've almost entirely worked on a Mac at home, so I've never really "configured" it, at least not in the way we did via the nano and vi text editors in Linux. So I can't say this is different from what I have done before, since it really feels like a whole new ball game. But perhaps I've been "configuring" all along and just didn't realize it.

I certainly have created/changed folders (or directories) and files in the past by right-click selecting them and using key commands to create folders via the Mac Finder which is almost identical in function to a Windows Explorer window. I have also downloaded clients in the past to do batch name changes. This seems like something that I would be able to do just as well if I knew how in a shell. I just have to get those commands down.

Tuesday, June 02, 2009

Tutorials!

There were three main tutorials for us to go through in our Unit 2 section of class. Two pretty much involved understanding Linux and its history, while the third was a series of YouTube videos made by Professor Fulton on installing the Ubuntu server and (optionally) the desktop in VMware.

I already mentioned Arthur Griffith's Introduction to Linux in my prior post. The next tutorial was a standard text-based web page, "Learning the Shell," which I printed out. Using it as a guide, I followed the suggestions for working within the shell (command-line interface) of the desktop. It was pretty thorough and I got a lot out of it, even though I did not have an example to go by outside of the basic descriptions on the page. The later sections were of less help, however, as they were more about listing commands and arguments versus actually applying each set. I think the best way to learn anything is through practice and exercises. Talking about something doesn't give you the applied knowledge necessary for conquering any skill set. I need to know why and how to use terms, not just a list with basic definitions.

The installation tutorials were the most concrete and easiest to follow. I had a little concern since my version of VMware Fusion was a tad different from the one in the videos, but everything ended up quite well. Being able to pause and rewind to go over tricky spots proved extremely helpful.

There were other tutorials as well, but again they were mostly lists and/or general reference. Although I have bookmarked them and will assuredly return to them for review and reference, the tutorials with hands-on lessons were the most effective.

Friday, May 29, 2009

Booting from the Ubuntu Desktop Live CD

This week I downloaded, burned, and booted an ISO file for the Ubuntu desktop Live CD. Honestly I had never heard of the concept (at least to my recollection) of a live CD although most of my co-workers had.

The download, checksum, and burning all went well, just as our professor and the online instructions said. When I initially booted up my iMac everything looked good. I got the splash screen and ran the verification to make sure my copy wasn't corrupt. So far no problems. I was pretty jazzed.
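
For anyone who hasn't done it, the checksum step is just comparing one computed value against the published one (the file name here is illustrative; use whatever image you downloaded):

# Compute the checksum of the downloaded image
# (on a Mac the equivalent command is "md5" rather than "md5sum")
$ md5sum ubuntu-8.04-desktop-i386.iso

# Or, with the published MD5SUMS file saved in the same directory,
# let md5sum do the comparison itself
$ md5sum -c MD5SUMS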

The next step was to actually finish the boot, and about this time I was asking myself what issues might pop up. I have a wireless (Bluetooth) mouse and keyboard and was concerned. Of course my fears came true. I got an initial firmware error, and then once the desktop finished booting up I could do nothing. The mouse and keyboard were dead.

I checked with our instructor, Professor Fulton, and the Ubuntu user forums and came to the understanding that since I was booting from a live CD there wasn't much I could do. I did go out and pick up a USB keyboard, but for the time being I used a secondary laptop (Windows XP) and burned a new bootable disc. (We were instructed to burn a separate disc with the drive that would be booting it, as each computer can be quirky when using discs burned on different systems.) This time I had no problems and was finally able to get to the terminal/command line easily.

Now I'm ready to actually run some lines and see how it goes. I'm also going through a massive amount of Linux tutorials. The videos (via VTC) are great, but there are times the instructor is a bit quick and thick with the details and completely loses me. I'm not too concerned, though, because the written (hands-on) tutorials should reinforce the concepts and drive home the things I need to learn. I'll write more about that later once I've delved further in...

Friday, May 22, 2009

Ubuntu User Forum

As part of our class activities we were asked to take a look at the Ubuntu user forums and comment on anything we found interesting. I took notice of two threads:

Install Ubuntu 32 bit or 64 bit?

Interesting thread on 32- vs. 64-bit Ubuntu. Some say there is no problem, but your results may differ. The users then provide interesting snippets asking and answering the question of what RAM and your processor actually do, and how increased memory actually speeds up your work. Some basic but interesting stuff.



8.04 Keeps Getting Worse

The user upgraded to version 8.04 from 7.10 and began experiencing freezing errors on their laptop and loss of internet connectivity. The syllabus mentioned issues with laptops (I'm considering getting one instead of a desktop as my secondary system) and with wireless causing problems with the Linux install. This thread seems to correlate with that, although it looks like most users have no problems.

The responses are very helpful in trying to diagnose an extremely frustrating problem for a user. From my experience I had similar problems, but mine resulted from hardware failures in the logic board and power supply. Also, one user, "muteXe," along with some others, suggested that 8.04 is far more stable than 9.04, which also reinforces why we were asked to stick with the more proven version. For the record, I believe our administrators are using Ubuntu 8.04 for the repository we are building at our library.

Tuesday, May 19, 2009

A Student Again

After at least five years I'm taking classes again, and I'm actually a grad student at the U of A now. Distance learning is amazing.

Monday, March 16, 2009

Testing video chat with work this morning. It rocks.

Friday, March 13, 2009

The weekend begins along with Donovan's spring break so I'll be out and about this next week.

Monday, March 02, 2009

listening to ...Tiffany Anastasia Lowe sung by June Carter Cash album:Press On

Friday, February 27, 2009

Just got back from the park and library with the kids. Good times but man the kids are busy.

Tuesday, February 17, 2009

back to work...looking over the year in review

Friday, February 13, 2009

Finishing last minute stuff on latest episode and calling it a night. Had a great time at 4 Peaks tonight too.

Tuesday, February 10, 2009

Currently I'm working on our 90th podcast...this one deals with an exhibit about a spy...

Monday, February 02, 2009

back to work at home
timecode errors during encoding suck....especially 20 minutes in.

Wednesday, January 28, 2009

Donovan just told me a joke...Why did the ants go to the shoe store? Because they wanted to get haircuts. :-P
Lost tonight sweet...

Sunday, January 18, 2009

The Cards are going to the Super Bowl. I don't really follow football and am still amazed. Good game.
Finally caught the Battlestar premiere. Crazy, fracken weird. Only 9 episodes left...

Wednesday, January 14, 2009

Another sad day Ricardo Montalban is gone http://ping.fm/aSLJS :'(

Monday, January 12, 2009

Finishing up some work and wishing it was the 21st so I can get LOST again

Monday, January 05, 2009

working on some video stuff...at home