Monthly Archive for April, 2010

Presentation: Twitter for in-class voting and more for ESTICT SIG

Today I presented some of my work on twitter voting to the Engaging Students Through In-Class Technology (ESTICT) special interest group. This group “is a UK network of education practitioners and learning technologists interested in promoting good practice with classroom technologies that can enhance face-to-face teaching.”

I used this slot as an opportunity to try out some presentation techniques. The first was Timo Elliott’s PowerPoint auto-tweet plugin, which allows you to automatically tweet notes as you work through the slide deck. The plan was that this would provide ready-made links and snippets for re-tweeting, favouriting or just copying into a user’s personal notes. I also did this to generate information to twitter subtitle my presentation. An unforeseen benefit was that the tweets provided a stimulus for further discussion after the presentation.

The other technique I picked up was from a presentation by Tony Hirst, in which he included links to secondary resources by displaying only the end of a shortened URL. This is demonstrated in the presentation (with twitter subtitles, of course ;-) (the link also contains a recipe for lecture capture enhancement):

ESTiCT Presentation link

What I’ve starred this week: April 27, 2010

Here are some posts which have caught my attention this week:

Automatically generated from my Google Reader Shared Items.

Convert time stamped data to timed-text (XML) subtitle format using Google Spreadsheet Script

Wage dislikes spreadsheets
Originally uploaded by Dyanna

My post titles just get better and better. As part of my research into twitter subtitling I’ve focused on integrating the Twitter Search and Twapper Keeper archive into the twitter subtitle generator tool, but I’m aware there is a wider world of timed data for subtitlizing. When Tony contacted me on Friday with some timed data he had as part of his F1 data junkie series, it seemed like the ideal opportunity to see what I could do.

The data provided by Tony was in *.csv spreadsheet format; the first couple of lines are included below:

timestamp,name,text,initials
2010-04-18 08:01:54,PIT,Lewis last car's coming into position now.,PW
2010-04-18 08:02:05,PIT,All cars in position.,PW
2010-04-18 08:02:59,COM,0802: The race has started,CM
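For anyone wanting to work with data like this outside a spreadsheet, a row can be parsed into an object in plain JavaScript. This is an illustrative sketch only (the function name is my own and it is not part of the spreadsheet solution described below); it relies on the assumption that only the text field may contain commas:

```javascript
// Illustrative only: parse one line of the timed-data CSV into an object.
// Assumes only the text field can contain commas, so everything between
// the second and last comma is treated as the text.
function parseRow(line) {
  var parts = line.split(",");
  return {
    timestamp: new Date(parts[0].replace(" ", "T") + "Z"),
    name: parts[1],
    text: parts.slice(2, parts.length - 1).join(","),
    initials: parts[parts.length - 1]
  };
}

var row = parseRow("2010-04-18 08:01:54,PIT,Lewis last car's coming into position now.,PW");
// row.name is "PIT", row.initials is "PW"
```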

My first thought was to just format it in Excel, but I quickly got frustrated with the way it handles dates/times, so instead I uploaded it to Google Spreadsheet. Shown below is how the same data appears:

Google Spreadsheet of csv

Having played around with the timed-text XML format, I knew the goal was to convert each row into something like the following (wrapped, of course, with the obligatory XML header and footer):

<p style="s1" begin="00:00:00" id="p1" end="00:00:11">PIT: Lewis last car's coming into position now.</p>

Previously I’ve played with Google Apps Script to produce an events booking system, which uses various components of Google Apps (spreadsheet, calendar, contacts and sites), so it made sense to use the power of Scripts for timed text. A couple of hours later I came up with this spreadsheet (once you open it, click File –> Make a copy to allow you to edit).

On the first sheet you can import your timed data (it doesn’t have to be *.csv, it just has to be readable by Google Spreadsheet), and then clicking ‘Subtitle Gen –> Timed Data to XML’ generates the timed-text XML on the XMLOut sheet.

Below is the main function, which does most of the work, with comments indicating what’s going on:

function writeTTXML() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var dataSheet = ss.getSheets()[0];
  var data = getRowsData(dataSheet); // read data from first sheet into a javascript object
  var sheet = ss.getSheetByName("XMLOut") || ss.insertSheet("XMLOut"); // if there isn't an XMLOut sheet, create one
  sheet.clear(); // make sure it is blank
  // start the XMLOut sheet with the tt-XML doc header
  sheet.getRange(1, 1).setValue("<?xml version=\"1.0\" encoding=\"utf-8\"?><tt xmlns=\"http://www.w3.org/2006/10/ttaf1\" xmlns:ttp=\"http://www.w3.org/2006/10/ttaf1#parameter\" ttp:timeBase=\"media\" xmlns:tts=\"http://www.w3.org/2006/10/ttaf1#style\" xml:lang=\"en\" xmlns:ttm=\"http://www.w3.org/2006/10/ttaf1#metadata\"><head><metadata><ttm:title>Twitter Subtitles</ttm:title></metadata><styling><style id=\"s0\" tts:backgroundColor=\"black\" tts:fontStyle=\"normal\" tts:fontSize=\"16\" tts:fontFamily=\"sansSerif\" tts:color=\"white\" /></styling></head><body tts:textAlign=\"center\" style=\"s0\"><div>");
  var startTime = data[0].timestamp; // collect start time from first data row; all subsequent times are relative to this
  // loop through the data one row at a time, except the last row (excluded because it has no end time)
  for (var i = 0; i < (data.length - 1); ++i) {
    var row = data[i];
    var nextRow = data[i + 1];
    row.rowNumber = i + 1;
    // calculate begin and end for an entry, converting to HH:mm:ss format
    var begin = Utilities.formatDate(new Date(row.timestamp - startTime), "GMT", "HH:mm:ss");
    var end = Utilities.formatDate(new Date(nextRow.timestamp - startTime), "GMT", "HH:mm:ss");
    // prepare string in tt-XML format. Content is pulled by referencing the column header
    // in normalised format (e.g. a column headed 'Twitter status' is normalised to 'twitterStatus')
    var str = "<p style=\"s1\" begin=\"" + begin + "\" id=\"p" + row.rowNumber + "\" end=\"" + end + "\">" + row.name + ": " + row.text + "</p>";
    // add line to XMLOut sheet
    sheet.getRange(row.rowNumber + 1, 1).setValue(str);
  }
  // write the tt-XML doc footer
  sheet.getRange(sheet.getLastRow() + 1, 1).setValue("</div></body></tt>");
}

If your timed data has different headers you can tweak this by clicking ‘Tools –> Script –> Script editor …’ and changing how the str variable in writeTTXML is constructed.

I’m the first to admit that this spreadsheet isn’t the most user friendly and it only outputs the tt-XML format, but hopefully there is enough structure for you to go, play and expand (if you do, please use the post comments to share your findings).

Searching the backchannel with Twitter subtitles

Pair programming is an agile software development technique in which two programmers work together at one work station. One types in code while the other reviews each line of code as it is typed in. The person typing is called the driver. The person reviewing the code is called the observer (or navigator). The two programmers switch roles frequently (possibly every 30 minutes or less). From Wikipedia

Regular followers of the twitter subtitle story will be aware that this idea has been bouncing back and forth between myself and Tony (here are some of his posts). While we don’t have a true ‘pair programming’ relationship, the dynamic is very similar. So when Tony posted a method for deep-search linking a twitter caption file using Yahoo Pipes, it was time to hit the driving seat for some evening coding.

Using the other Martin’s presentation again, I’ve put together this page which demonstrates twitter caption search and timecode jump (I should point out that limitations of the JWPlayer mean jumps can only be made to portions of the video which have already been buffered).

Twitter subtitle - search and timecode jump

How it was done

Taking the JWPlayer used in the previous post, I dropped it onto a page, also pasting in the subtitles from the XML file. With a bit of CSS styling and A K Chauhan’s JavaScript List Search using jQuery, the pasted XML can be filtered, and using the JWPlayer JavaScript API you can jump to the related part of the video. When I get a chance I’ll integrate this functionality into the twitter subtitle generator. Update: Breaking my ‘no coding in office hours’ rule, this feature is now enabled for the ‘YouTube with Tweets’ option of the twitter subtitle generator.
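The search-and-jump behaviour boils down to two steps: filter the subtitle entries for a term, then seek the player to the matching entry’s start time. Here is a rough sketch of the filtering half in plain JavaScript; the data shape and function name are my own inventions (the actual page uses A K Chauhan’s jQuery list search):

```javascript
// Illustrative sketch only: filter subtitle entries by a search term.
// Each subtitle keeps its start offset in seconds so a match can then
// be used to seek the player.
function searchSubtitles(subtitles, term) {
  var needle = term.toLowerCase();
  return subtitles.filter(function (s) {
    return s.text.toLowerCase().indexOf(needle) !== -1;
  });
}

var subtitles = [
  { start: 0,  text: "Lewis last car's coming into position now." },
  { start: 11, text: "All cars in position." },
  { start: 65, text: "The race has started" }
];

var hits = searchSubtitles(subtitles, "position");
// hits contains the first two entries; the page would then seek the
// JWPlayer to hits[0].start (which, as noted above, only works for
// portions of the video that have already been buffered).
```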

Some thoughts

Historically, one of the issues with audio/video content has been the ability to search and deep link into it. This is changing, most notably with Google/YouTube’s auto-captioning of videos, but as Tony pointed out in his last post there is still some way to go. Providing a contextualised and searchable replay of the backchannel alongside what was actually said potentially opens up some interesting uses. With a number of universities exploring the use of lecture capture, there is potentially an opportunity to enrich this resource with the backchannel discussion. In particular I’m thinking of the opportunity for students to learn vicariously through the experiences and dialogue of others. Before I go all misty-eyed, the reality check is that twitter isn’t that widely used by students (yet), but surely this is a growth area.

What I’ve starred this week: April 20, 2010

Here are some posts which have caught my attention this week:

Automatically generated from my Google Reader Shared Items.

JISC10 Conference Keynotes with Twitter Subtitles

Last week saw the return of the JISC conference. As with other similar events, the organisers explored a number of ways to allow delegates to experience the conference virtually as well as in person. The main avenues were video streaming some of the sessions live across the web; the inclusion of a Ning social network (I’m guessing they won’t be doing this again next year. See Mashable’s Ning: Failures, Lessons and Six Alternatives); and advertising the #jisc10 hashtag for use on twitter, blogs etc. I would recommend Brian Kelly’s Privatisation and Centralisation Themes at JISC 10 Conference post, which presents some analysis and discussion on the effectiveness of each of these channels.

It is apparent that the JISC conference mirrors a wider emerging trend of allowing dispersed audiences to view, comment on and contribute to live events. A recent example is that of the #leadersdebate broadcast on ITV, which, as well as having over 9.7 million views, generated over 184,000 tweets (from tweetminster.com) and numerous other real-time comments on blogs and other social network sites.

I didn’t have a chance to attend the conference myself, and other things meant I was unable to see the live video streams, although I was able to keep an eye on the twitter stream. Fortunately the conference organisers have made the videos of the keynote speeches by Martin Bean and Bill St. Arnaud available. It is, however, difficult to replay the video with the real-time backchannel discussion. Cue the twitter subtitle generator, which I’ve been exploring through various posts. So if you would like the live video/twitter experience, I’ve embedded the videos below.

Opening Keynote: Martin Bean, Vice Chancellor, The Open University

Subtitle content provided by twitter | Download the XML subtitle file

Closing Keynote: Bill St. Arnaud, P. Eng. President, St. Arnaud-Walker and Associates Inc.

Subtitle content provided by twitter | Download the XML subtitle file

Here are Martin Bean’s and Bill St. Arnaud’s biographies and keynote slides. Both of the videos were produced by JISC and distributed under Creative Commons.

Just a quick couple of words on the subtitle file generation. I had planned to use the archive of tweets provided by Twapper Keeper for both keynotes, but there was a 45-minute hole in the archive between 08:44 and 09:27 GMT for the first session, which is being investigated, so I used the Twitter Search instead. As the session was early in the morning and twitter limits searches to 1500 tweets, I had to modify the query to ‘#jisc10 -RT’, which removes retweets, to get results for all of Martin Bean’s presentation (he still has a healthy 372 original tweets during the course of his presentation). [There is perhaps an interesting way to visualise RTs in the subtitle file to indicate consensus tweets – one for another day.]

If you are planning to run your own event and would like to create a twitter video archive here are some basic tips:

  1. Make sure you advertise a hashtag for your event
  2. Before the event create a hashtag notebook on twitter archive service Twapper Keeper – there are other archive services but currently the subtitle tool only integrates with this one
  3. Make sure video is captured in a reusable format. The video above is played back with the JW Flash Video Player which supports FLV, H.264/MPEG-4, MP3 and YouTube Videos. Generated subtitle files can also be used directly in YouTube (if you own the video). I’ve also experimented with Vimeo for longer videos.

If you would also like an ‘at the scene’ report of the keynotes and some of the plenary sessions, you should read this post by my colleague Lis Parcell at RSC Wales – Technology at the heart of education and research: JISC10 conference report.


A real-time collaborative education with Google Docs

At the beginning of the year, in the ‘A real-time education (etherpad, mindmeister and cacoo)’ post, I made a prediction that there would be a lot more real-time collaboration tools/web applications. At the time I mentioned that the real-time collaborative word processing tool Etherpad had been bought by Google and that the original etherpad.com would be disappearing at the end of March. The reprieve has been extended, as noted in this Transition Update, and as the code is open source a number of other sites have popped up which clone etherpad’s original functionality.

It looks like Google have deployed the old Etherpad team to take a look at the Google Docs suite, recently announcing a number of improvements to Google documents and spreadsheets, which include character-by-character collaboration. Another big addition is a new standalone collaborative drawing editor. This is similar to Cacoo but without as many features. The big advantages of Google’s editor are that it uses a single sign-on and allows pasting into other Google Docs using the web clipboard. One final addition worth mentioning is the extension of the sidebar chat from spreadsheets to documents and drawings. The video below gives a demonstration of the new features.

I’m sure a number of institutions who have already signed up to Google Apps will be rubbing their hands at the addition of this functionality for all their students. If you would like to try out these new features yourself, Google are phasing the changes in, but you can opt to start using them now by opening http://docs.google.com/settings, clicking the Editing tab and checking ‘New version of Google documents’.

A final thought, and one I’m sure a number of commentators have picked up on: with Google’s work in indexing the web, print/journals (Google Books and Scholar) and even video/audio (YouTube auto-captioning), it seems it is only a matter of time before they integrate plagiarism detection tools into Google Docs.

The need for speed: Tuning up to keep your students (and Google) happy

The need for speed
Originally uploaded by toastforbrekkie

Google recently announced that it is using site speed in web search ranking, and while the weighting of this metric is slight (less than 1% of search queries will be affected), it is still good practice to make sure your web resources are optimised.

I was a little shocked to discover MASHe didn’t fare particularly well on speed tests. Self-hosted WordPress blogs inherently give you a lot of flexibility in how you configure your blog, making endless tweaks to its appearance and available functionality via plugins. The cost of this flexibility is that you can quickly turn your site into a quagmire of extra code, slowing down page loading times. You are also reliant on your server configuration being correctly optimised. This post documents what I discovered and how I fixed it.

Diagnosis

Having already signed up to Google’s Webmaster Tools, I was able to check Labs –> Site performance and was a little shocked to see “your site takes 6.9 seconds to load (updated on Apr 2, 2010). This is slower than 83% of sites”.

My first step was to diagnose where I was losing time. The site performance results give you some pointers, but these aren’t real-time and I wanted a way to make sure I was heading in the right direction. I chose to download Google’s Page Speed and Yahoo’s YSlow. Both of these tools run a barrage of tests on a web page, highlighting areas where you can make improvements.

Fixes

Because the performance of self-hosted WordPress blogs is a known problem, there are a number of plugins available to optimise performance. Previously I had been using WP Super Cache, but as the site performance data showed there was perhaps more I could be doing, so I switched to W3 Total Cache, which has some nice features. The key words to look out for when optimising websites are page caching, server-side gzip compression, content delivery network (CDN) integration (also known as parallelizing) and minifying.

Just to expand on a couple of these:

Server-side compression – the basic idea is that a requested webpage is compressed at the server before being sent to the user, reducing the bandwidth required. A number of 3rd-party hosts don’t enable this feature, presumably because of the increased processing load on their servers. So despite my best efforts I was unable to use server-side compression.

CDN – because there is a limit to the number of page elements a browser can download from one hostname at a time, distributing assets across hostnames allows more items to be downloaded simultaneously. A quick fix for me was to use our host provider’s control panel to create a sub-domain, http://mashe.img.rsc-ne-scotland.org.uk, which mirrors my existing directory structure. W3 Total Cache then allows you to choose the types of files to serve from the different domain, effectively doubling the number of page elements downloaded simultaneously.

Minifying – the technique of compacting and sometimes merging different HTML, CSS and JavaScript elements. Compaction is achieved by removing unnecessary whitespace, line breaks and code comments. Whilst W3 Total Cache has minifying features, it didn’t like my CSS and JavaScript, so I used the separate WP Minify plugin.
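As a toy illustration of what minifying does (this is emphatically not the WP Minify algorithm, just the general idea), a naive whitespace-and-comment stripper for CSS might look like:

```javascript
// Naive illustration of CSS minification: strip comments and collapse
// whitespace. Real minifiers are far more careful (strings, hacks,
// edge cases); this sketch only shows the idea.
function naiveMinifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, "")   // remove /* comments */
    .replace(/\s+/g, " ")               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, "$1")  // drop spaces around punctuation
    .trim();
}

var css = "/* header */\nbody {\n  color: #333;\n  margin: 0;\n}";
// naiveMinifyCss(css) -> "body{color:#333;margin:0;}"
```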

The results are looking reasonably promising, with cached pages loading in 1.5–2 seconds. I’ve also gone from grade F on YSlow to grade B/C. The problem I still have is that pages which haven’t been visited for a long time (not yet cached, or having to be re-cached) take 20 seconds to load. I’ll perhaps come back to this another day, unless anyone has some immediate suggestions. In the meantime I need to get back to posts on enhancing teaching and learning with technology ;-)

What I’ve starred this week: April 13, 2010

Here are some posts which have caught my attention this week:

Automatically generated from my Google Reader Shared Items.

What I’ve starred this week: April 6, 2010

Here are some posts which have caught my attention this week:

Automatically generated from my Google Reader Shared Items.

About

This blog is authored by Martin Hawksey, e-Learning Advisor (Higher Education) at the JISC RSC Scotland N&E.

JISC RSC Scotland North & East logo

If you would like to subscribe to my monthly digest please enter your email address in the box below (other ways are available to subscribe from the button below):

Subscribe to MASHe for monthly email updates



The MASHezine (tabloid)

It's back! A tabloid edition of the latest posts in PDF format (complete with QR Codes). Click here to view the MASHezine


The MASHezine (eBook)

MASHe is also available as an ebook and can be downloaded in the following formats:

Visit feedbooks.com to manage your subscription

Archives

Opinions expressed in this blog are not necessarily those of the JISC RSC Scotland North & East.

JISC Advance Logo

JISC Advance is a new organisation that brings together the collective expertise of established JISC services.

For further information visit www.jiscadvance.ac.uk

Creative Commons Licence
Unless otherwise stated this work is licensed under a Creative Commons Attribution-ShareAlike 2.5 UK: Scotland License