Monthly Archive for January, 2011

Linking a Google Form with data from the responses in the Spreadsheet [Event/Resource Booking]

Is there a way to use a Google Form for event booking and cap entries for individual parallel sessions?

This was the question I got from my colleague Kenji Lamb, who is helping organise Game To Learn: Take 2!. The event has a number of parallel sessions with limited spaces in each. Ideally Kenji wanted a booking form which would track numbers and prevent people booking into sessions which were full. So is it possible? The quick answer, as demonstrated by the existence of the registration form, is yes, but there was some head scratching along the way (and there is one very big caveat at the end of this post). Here's how I did it:

Step 1: Getting live booking numbers out of Google Spreadsheet

The Google Visualization API provides a way to access spreadsheet data using their query interface (I've used this previously in gEVS – An idea for a Google Form/Visualization mashup for electronic voting). The problem is that to do this you need to make your entire spreadsheet publicly available, which is okay for anonymous voting but isn't ideal if you are collecting personal information like email addresses and phone numbers. Here's what I picked up from the documentation:

"non-embedded visualization runs with the privileges of the person viewing the visualization, so the spreadsheet must either assign view access to everyone, or to the specific person running the visualization" – from the Google Visualization documentation

The other option is to 'Publish as a web page', selecting a single sheet. There are a number of different formats you can choose for the data, including CSV (comma-separated values). I'll come back to what you can do with the CSV data in a second.

So we have a way of getting some data out but it needs to be filtered to leave the bit you need – how many places are left. My solution was to use a series of sheets and functions to filter the data. In this spreadsheet there are 5 sheets:

  • Form Raw – this is where the form data comes in
  • Parsed Data – this strips out extra session title/day info text using
    • =IF(ISERROR(FIND("17th May", 'Form Raw'!C2)),"-","X") – for the day; and
    • =LEFT('Form Raw'!D2,4) – to extract the 4-character session identifier
  • Session Counts – counts the number of bookings per session using COUNTIF (I could have skipped this sheet and gone straight to the free spaces but wanted an easy way for admins to tweak numbers)
  • Public – the bit we're most interested in: how many spaces are left
  • Templates – used for sending out email confirmations to delegates

Initially I prefilled Parsed Data with functions in the cells, but for some reason when the form was submitted it would remove the function from the equivalent Parsed Data row. Consequently I had to write an Apps Script which programmatically inserts the functions (I was going to use Apps Script anyway to automatically send an email confirmation when someone booked). I'm not going to go into the detail of the Apps Script in this post; the code is in the shared spreadsheet (if you need clarification on anything use the comments below).
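To give a flavour of the approach, here is a minimal sketch of an onFormSubmit handler along those lines (the sheet names, column positions, form question title and email wording are my assumptions, not the exact code from the shared spreadsheet):

// Minimal sketch only – sheet names, columns and the email text are assumptions;
// the working version is in the shared spreadsheet.
function onFormSubmit(e) {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var row = ss.getSheetByName('Form Raw').getLastRow();
  var parsed = ss.getSheetByName('Parsed Data');

  // Re-insert the parsing formulas that the form submission wipes out
  parsed.getRange(row, 1).setFormula(
      '=IF(ISERROR(FIND("17th May", \'Form Raw\'!C' + row + ')),"-","X")');
  parsed.getRange(row, 2).setFormula('=LEFT(\'Form Raw\'!D' + row + ',4)');

  // Send a simple confirmation email to the delegate
  var email = e.namedValues['Email address'][0]; // assumed form question title
  MailApp.sendEmail(email, 'Game To Learn booking confirmation',
      'Thanks for booking – your session choices have been recorded.');
}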

Step 2: Snaffling your Google Form

So we have a public sheet which has a summary of the available slots; how can we use this in the associated booking form? There is very little you can do with the form as hosted by Google, but if you know your way around HTML and have somewhere to host your alternate version it's easy enough to style Google Forms by viewing your live web form and then copying the source to another webpage.

Step 3: Converting CSV into something more useful (PHP/JavaScript)

Initially I tried grabbing the .csv data from Google using jQuery but kept getting access denied errors, so instead I resorted to PHP. Here's my PHP code snippet. PHP handles .csv well, and it would have been even easier if my host were on PHP 5.3 or above because I could have used str_getcsv() – as it is, there's a workaround I modified to use the file version of this function.

I could have used PHP to dynamically write the form, indicating if there were any filled sessions; instead I opted to use jQuery, as that sort of thing is a lot easier. First, converting the PHP array into a JavaScript object (I got this trick from here):

<script type="text/javascript">
    var list=<?php echo json_encode($data); ?>;
</script>
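For illustration, the resulting list object is just a map of session identifiers to remaining spaces, something like this (the codes and numbers here are hypothetical):

// Hypothetical example of the generated object – session codes and counts made up
var list = { "ABC1": 3, "DEF2": 0, "GHI3": 12 };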

Step 4: Updating the form

Next, disabling any of the parallel session select options that are full:

function checkForm() {
    $('select > option').each(function() {
        var str = $(this).text();
        var id = str.substring(0, 4);
        // Disable and relabel any session option with no spaces left
        if (parseInt(list[id], 10) <= 0) {
            $(this).attr('disabled', true);
            $(this).text("[FULL] " + str);
        }
    });
}

Because Google does its form validation server side, to prevent the user submitting the form with missing data and being redirected back to an unmodified version of the form, you need to validate before submission. Fortunately jQuery has a Validation Plugin which makes it easy to do validation client side. All you need to do is include

<script type="text/javascript"
    src="http://ajax.microsoft.com/ajax/jquery.validate/1.7/jquery.validate.min.js"></script>

in the head of the page then initialise validation with:

$("#yourFormName").validate();

To make parts of the form required just add class="required" to the form elements.
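As a sketch (the element name here is made up for illustration), you can also add the class from the same script rather than editing the copied form markup:

// Hypothetical example: mark a session select as required without editing
// the copied form HTML – the element name is an assumption
$('select[name="entry.1.single"]').addClass('required');
$("#yourFormName").validate();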

Some other things you can do in Google Spreadsheet (I did not know that)

First off, Google have added some nice email notification options to Google Spreadsheet. These let you set up immediate or digest notifications for changes to the spreadsheet or other events.


The second thing I discovered is that Apps Scripts are by default only editable by the person who first added them to the spreadsheet (others who open the script editor can see the script but there is a little padlock next to the script name). To allow any of the spreadsheet collaborators to edit, you need to go into Tools > Scripts > Script Editor > Share > Share settings.

Third thing: if you set up a script trigger that includes sending emails, they will be addressed as coming from the account of whoever set up the trigger. I decided to spare myself queries from Game To Learn delegates to my personal Gmail account by sharing the spreadsheet with one of our catch-all email addresses, then using this account to create the email trigger.

The big caveat

As the public data (like that shown on this page) is only updated every 5 minutes, there is a chance that a session could go over quota if bookings are made in quick succession. If anyone knows a way around this I would be very grateful ;-).

Finally, if you want to have a go I've set up an example form here. You can see how the validation works, and if you select session ABC1 and wait 5 minutes before refreshing the page ~:-S you can see what happens when a session is full (for this example the spreadsheet keeps adding an additional space every 10 minutes).

Something I wrote almost 10 YEARS AGO on 3D display technology

Back in 2000/2001 I was studying for an MSc in Multimedia and Interactive Systems. At the time I was quite interested in 3D technology and the birth, death, birth, death, birth, death of the 3D web. Digging around some old files the other day I came across a piece of work I did with my long-standing friend Drummond Cargill on '3D Web Interfaces – The Next Dimension?'. Below is the section I wrote on hardware. Amazing how little has changed in 10 years (other than increased availability/lower cost).

Hardware

Current display and input devices available to the general public have remained unchanged for a number of years, but as the IT industry in general heads towards a more enriched 3D environment, new methods for viewing and navigating these worlds are being developed. While many of these new devices remain in the domain of specialists, there has already been a diffusion of these new technologies into the home in countries like Japan and North America. The new devices required to view and navigate virtual environments can be divided into two broad categories, display and navigation. The following section investigates the hardware available in both of these categories.

Display

All current display techniques use the same method to deceive the brain into thinking it is looking at a 3-dimensional object. This method is based on giving different data to the left and right eyes, and at present there are four basic ways to achieve this effect:

Liquid Crystal (LC) glasses for video and computer monitors.

AnotherWorlds LC Glasses, Another Eye 2000  £65.

Liquid Crystal (LC) glasses, or shutter glasses, are used to view Stereo3D on video and computer monitors. They are inexpensive and provide viewers with full-colour images. LC glasses can be used on standard CRT monitors, but if the refresh rate is 60 Hz there is a noticeable flicker due to the low frame rate; this is, however, not a problem for 120 Hz monitors. LC glasses are emerging as a future technology for the home because of their low cost and the fact that they don't need a dedicated system to display images. LC glasses range in price from £50 - £350 and are already directly supported by software packages such as 3D Studio Max.

Polarized glasses for images projected on large screens and specially equipped computer monitors.


VRex VR-3100 Projector and Polarised Glasses £7,500.

Polarised glasses are used to properly view 3D objects from projections and specially equipped VGA monitors. Inexpensive polarised glasses are available in both plastic and paper frames and may be imprinted with logos, promotional text or other graphics. Polarised glasses are more commonly used for large audiences and trade fairs. Although polarised glasses cost between £0.30 - £3.50, projectors range in price from £5,000 - £10,000 and adaptations of standard CRT monitors using liquid crystal plates range from £1,500 - £5,000. Polarisation is a technique commonly used in theme parks such as Disney and in IMAX to achieve 3D effects because of its accessibility to a wide audience and relatively low cost.

Interactive Imaging Systems VFX3D Headset £1,250.

Head-Mounted Display (HMD) devices are similar to Liquid Crystal glasses except that each eye has a dedicated liquid crystal display to generate the Stereo3D image, removing the possibility of flicker. HMDs often also incorporate audio headphones to give a complete immersive experience. Although HMD devices were considered the way forward in the early nineties, they are now considered expensive and cumbersome compared to LC glasses. This has led many major manufacturers such as Sony and I-O Display Systems to discontinue lines. HMD headsets which include tracking devices have found use in total Virtual Environment immersion applications. HMD headsets range in price from £350 - £5,000.

Dimension Technologies 18.1" DTI 3D flat panel display £5,000.

Auto-stereoscopic flat panels use parallax illumination, which involves sending two images – one to the left eye and one to the right eye – to different columns of pixels: the left eye images go to the odd numbered columns and the right eye images to the even numbered columns. The LCD display has a standard arrangement of LCD backlight and LCD panel, but auto-stereoscopic panels have an additional panel in between called a TN panel. The vertical columns on the TN panel illuminate either the even or odd columns of pixels, depending on which image is coming through. Your left eye sees only left eye images and your right eye sees only right eye images, just as you do in real life. Auto-stereoscopic panels do require you to sit in an optimum position relative to the panel to get a true 3D effect, but they have the advantage of not requiring cumbersome headgear or glasses which may lead to health issues. Auto-stereoscopic flat panels range in price from £1,200 - £10,000, and the price is very sensitive to viewable area and additional functions such as eye tracking.

At present true 3D display is very feasible and is on the verge of entering the home market. Applications for Stereo3D remain limited, but with the increasing desire for the total home entertainment system it is only a matter of time before minimum specifications for personal computers include a form of 3D output. Having a method for displaying Stereo3D images is only the first step in the problem. To utilise this new hardware fully, 3D objects and worlds have to be designed with this technology in mind, and at present very few software companies, except those dealing with computer aided design and animation, are considering the potential of this output medium while it is at such an early stage of consumer acceptance. But with Microsoft's continual development of TaskGallery, a 3D desktop, Stereo3D imaging appears to have a strong future.

What I’ve starred this month: January 28, 2011

Here are some posts which have caught my attention this month:

Automatically generated from my Google Reader Shared Items.

Learning and Knowledge Analytics (LAK11) Week 1

So we are now into week 2 of the open course in Learning and Knowledge Analytics, LAK11. Whilst I'm already doing better at this course than PLENK10, I would still only class my involvement as peripheral participation, so I'll no doubt be revisiting the LAK11 syllabus again at a later date. Here are a couple of things I picked up from week 1 you might be interested in:

The only paper I had a chance to properly read was Elias, T. (2011) Learning Analytics: Definitions, Processes, Potential. It was more luck than anything else that I started here, but I was very glad of the fortune [it was only later that I read Dave Cormier's 'MOOC newbie voice - a slackers entrance into lak11' post, which reassured me that although I wasn't doing much at least it was the right thing].

Things I took away from the paper were:

  • Some examples of learning analytics systems already being used.
    • Purdue's Signals block for Blackboard – "To identify students at risk academically, Signals combines predictive modeling with data-mining from Blackboard Vista. Each student is assigned a 'risk group' determined by a predictive student success algorithm. One of three stoplight ratings, which correspond to the risk group, can be released on students' Blackboard homepage." [this reminded me of University of Strathclyde's homegrown STAMS VLE which appears to have disappeared when the University moved to Moodle – a bit of a shame as it was developed by staff in Statistics and Modelling Science, so I imagine behind the scenes it had a dusting of analytics – that's progress for you]

    • University of California Santa Barbara's Moodog Moodle module – "In addition to collecting and presenting student activity data, we can proactively provide feedback to students or the instructor. Moodog tracks the Moodle logs, and when certain conditions are met, Moodog automatically sends an email to students to remind them to download or view a resource." Zhang (2007) (p. 4417) [I was a little disappointed to only find references to this in academic papers]
  • Something on collective intelligence - Woolley et al. (2010) identified the existence of collective intelligence which “is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group" (p 686)
  • Some terminology/theories for recommendation systems – "recommendation methods based on different theories such as collaborative filtering algorithm, bayesian network, association rule mining, clustering, hurting graph, knowledge-based recommendation, etc. and the use of collaborative filtering algorithms (Cho, 2009)" [at this point in the paper I thought about Tony Hirst's Identifying Periodic Google Trends posts, mainly in underscoring the sheer scale of the field of learning analytics]

Overall the paper was very useful in highlighting how much I didn't know, and it gave an indication of the things I might need to know [whilst it might not sound like it, this is a positive outcome as it lets me self-regulate my learning].

Some things on participating on the course in general

There were other things I did during week one, including playing with the recommendation search engine Hunch. This experience was juxtaposed with the course Moodle site, which was blindly sending me hundreds of emails from the course discussion forums. In the end I decided to unsubscribe from the email notifications and pull the forum into Google Reader via RSS. My hope was Google Reader would 'sort by magic' to pull interesting things to the top, but the algorithm is struggling to do anything other than chronologically order the feed [my guess is Google doesn't have enough data on my personal or group preferences – ho hum ;)]

Where are you coming from: Search referrer and contextual related post/information

Just before Christmas Brian Kelly wrote a post on Trends For University Web Site Search Engines, which gives an overview of which search engines [Edit: Russell Group*] universities are using on their websites (75% using Google products). This, combined with my interest in Learning Analytics (plug: it's week one of the open course LAK11 Learning and Knowledge Analytics), got me wondering how many institutions use information about their visitors to customise content. There are a number of ways you could potentially do this, from using media campaigns to using one of the Facebook Social plugins.

*Brian Kelly has kindly pointed out this was a survey of Russell Group institutions and not all universities as originally implied

The question of what is already being used is probably best answered by someone else, like Brian. Instead I'm going to highlight one other simple way that you might customise the visitor's experience (and at the same time improve how information on this blog is presented): by using search referrer information.

For the majority of users, when they navigate around the web they leave a 'referrer' trail. When they land on a page, that site can usually see where the person came from (if you read the HTTP Referrer entry on Wikipedia you'll see why this information isn't always available). Monitoring tools like Google Analytics can record referrer information so that you can see where your traffic is coming from. I use Google Analytics to monitor traffic to this site and whilst I don't use this data extensively (although I did modify the Google Analyticator WordPress plugin to display top posts based on Analytics data), it is useful information to check the general health of my blog.

When monitoring referrer information it is possible to record the whole web address of the page with the click-through link, including the query string (the 'junk' at the end). For example, if you were to open this page http://www.google.co.uk/search?&q=jisc+rsc+mashe and click on the link for this blog, and your browser is passing referrer information, I can see you got here from a Google search for 'jisc rsc mashe'.
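As an illustration of the mechanism (a client-side sketch only – my actual implementation is a server-side tweak to the plugin described below), the search keywords can be pulled out of a Google referrer like this:

// Sketch: extract the search keywords from a Google referrer such as
// http://www.google.co.uk/search?&q=jisc+rsc+mashe
// (client-side illustration only; the blog does this server side in PHP)
function getSearchQuery() {
    var ref = document.referrer;
    if (ref.indexOf('google.') === -1) return null; // only handle Google referrers
    var match = ref.match(/[?&]q=([^&]*)/);
    return match ? decodeURIComponent(match[1].replace(/\+/g, ' ')) : null;
}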

In fact I know from my Google Analytics data that in 2010 over a third (37%) of my visitors arrived via a search engine, almost all (97%) using Google, and as the table shows I even know the main search keywords used.

Table of top search keywords used in 2010

So if I know over a third of people end up here having searched for something, wouldn't it be good if I could highlight more of my content based on their search? Hopefully your answer is yes, otherwise I've wasted a hell of a lot of my own time chasing my tail on this.

Rather than completely reinventing the wheel, I had a quick look at some existing WordPress plugins to see if anything would do this. The one that came closest was WP Greet Box which, as well as providing a custom greeting message based on where your visitor is coming from, can optionally use the search query to display some related posts if the user comes from a search engine.

The problem I had with this solution is, as well as thinking the greeting was a little tacky, it only suggests related posts. As this blog has evolved I have more bespoke pages and tools, which is why I use a Google Custom Search Engine (CSE), combined with some of my own 'instant' magic, to let people search all of the material in the MASHe directory.

Finding nothing else, and as I already use the Contextual Related Posts plugin, I thought it would be fun to modify this so that if someone lands on one of my posts from a search engine, their search query is used to pull related material using my site's Google CSE.

Now, using my modified contextual-related-posts.php, if you go to this TwEVS post the related post information at the end remains the same, containing links for:

You also might like :

But if you end up at the same page by for example clicking a link to it from this Google Search you get the following instead:

You also might like (based on your search for ‘twevs’):

So what do you think?

Personally I'm not entirely convinced that this will have any impact on driving traffic internally within this site, because my volume of visits is relatively low and the placement at the end of the post isn't optimal for leveraging extra content. The other factor is that normally, if I've written something or have a related tool that might be of interest to the reader, I include a link in the body of the post. But if nothing else maybe you've learned a bit about referrer information and you'll come up with a better idea than me.

[A couple of other ‘techy’ things I learned along the way worth sharing:

Forthcoming JISC Supported Events in Scotland

There are a couple of JISC-supported events over the next few months I thought worth highlighting.

Open Edge: Open Source in Libraries
25-26th January 2011, e-Science Institute, 15 South College Street, Edinburgh

This free two day event on open source software for libraries is being run in collaboration with JISC and SCONUL. The first day is 'Haggis and Mash', a Mashed Library event, while the second day covers broader issues, in particular how capacity might be built to enable open source solutions to flourish in HE and FE Libraries. Delegates can choose to attend either or both days of this event.

More details and booking: http://www.nesc.ac.uk/esi/events/1114/

______________________

Netskills Workshops

Netskills is a JISC service based in Newcastle, but all of the following take place in Edinburgh, in a good quality central training venue only a few minutes' walk from Waverley Station!

If Netskills receive your reservations before 10th Jan, you will qualify for a VAT-defying reduction of £32 (i.e. a workshop place is £128 instead of £160).

* Thu 24 Feb: CSS: A Complete Web Style Toolkit
* Fri 25 Feb: Writing for the Web
* Mon 28 Feb: An Introduction to Instructional Design for eLearning
* Wed 9 Mar: Collaborative Tools to Support the Learner Experience
* Thu 10 Mar: Exploring Digital Storytelling
* Wed 16 Mar: Database Design and SQL

______________________

Enhancing Business Performance: The Role of Technology in Developing Skills and Knowledge
21st February 2011, e-Science Institute, 15 South College Street, Edinburgh

This free one-day conference is being organised by the JISC RSCs in Scotland in partnership with The Higher Education Academy Scotland team to explore how institutions are turning increasingly to technology to provide Business and Employer Engagement solutions. The conference itself will include a variety of sessions exploring current issues, practical resources and advice in the areas of Business and Employer Engagement. Some of the themes to be explored in the conference will include:

  • Technological solutions: opportunities and challenges
  • Developing the internal strategy: communication, culture, recognition and buy-in
  • Relationship management - what are the benefits/the costs?: people and processes
  • Professional development and skills for engagement
  • Student employability: skills and attributes learners should expect to develop

More details and booking: http://www.nesc.ac.uk/esi/events/1127/

About

This blog is authored by Martin Hawksey, e-Learning Advisor (Higher Education) at the JISC RSC Scotland N&E.




The MASHezine (tabloid)

It's back! A tabloid edition of the latest posts in PDF format (complete with QR Codes). Click here to view the MASHezine


The MASHezine (eBook)

MASHe is also available as an ebook and can be downloaded in the following formats:



Opinions expressed in this blog are not necessarily those of the JISC RSC Scotland North & East.


JISC Advance is a new organisation that brings together the collective expertise of established JISC services:

For further information visit www.jiscadvance.ac.uk

Creative Commons Licence
Unless otherwise stated this work is licensed under a Creative Commons Attribution-ShareAlike 2.5 UK: Scotland License