
How to Illustrate your results


When I was writing up my own PhD (in the antediluvian days before personal computers, desktop publishing software, or graphics packages!) I was given a very useful lesson by the Prof. who was my supervisor. I was agonising about how good my hand-drawn graphs and maps needed to be, and how precise the individual, hand-printed, iron-on lettering needed to look. He informed me, rather drily, for that was his preferred style, that I was “… training to be a geologist, not a draughtsman!”

From that response I understood, correctly, that if my diagrams were clear and accurate enough to convey my key point(s), then a point of diminishing returns was quickly reached on the time spent labouring over them. There is no need to produce a “work of art” – it is about “communication”. The situation is slightly different now, for there are lots of clever software packages, in Excel and elsewhere, which can quickly produce impressive diagrams that can be “cut-and-pasted” into the text with minimal effort – but two fundamental points remain the same. Firstly, if the initial data is weak and/or disorganised, then any resulting illustration is hardly worth the effort of interpreting with any degree of real meaning. As computer programmers are taught early: GIGO (garbage in, garbage out)! Secondly, a diagram (or a map, or a graph) needs to convey something meaningful. It is a visual expression of something that the author is trying to communicate to the reader, so if this can be communicated clearly and simply, that is sufficient. Far too many diagrams are over-designed, and the result can appear so complicated that it is the diagram, rather than the results, that needs to be explained to the reader.

In some subjects, there are more-or-less standard conventions for diagrammatic representations, such as histograms, bar charts, tolerance diagrams, or pie charts. It usually makes sense to abide by these conventions, because doing so helps comparison with similar studies elsewhere. Usually, simple is best. Let the eloquence of the diagram communicate the data for you. Sometimes, particularly given the speed and ease with which computer-generated diagrams can be produced, there can be a tendency to “graph every variable against every other variable” in the hope that a stunning correlation is unexpectedly revealed. While this can happen, it is more likely that a blinding flash of the obvious is revealed instead, contributing nothing more than confusion to the current understanding of the topic. As with the use of statistics, it is always better if the author actually understands what they are trying to do before attempting the activity. It is too easy to drop into the text a “pretty photograph” or a diagram of a rather obvious feature without actually conveying much real information (e.g. a pie chart of the male/female split of respondents; it is probably better just to give the percentage figures).
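To make that last point concrete, here is a minimal Python sketch of reporting a simple respondent split as plain percentage figures rather than devoting a whole pie chart to it. The counts are invented for illustration only:

```python
# Hypothetical respondent counts -- purely illustrative numbers,
# not data from any real study.
counts = {"female": 62, "male": 38}

total = sum(counts.values())

# Percentage split, rounded to one decimal place.
percentages = {group: round(100 * n / total, 1) for group, n in counts.items()}

# One plain sentence conveys this more efficiently than a pie chart would.
for group, pct in percentages.items():
    print(f"{group}: {pct}% of respondents")
```

Two lines of reported text replace an entire figure, and the reader loses nothing.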

In a few cases, the use of a few clever diagrams, such as fishnet images of topography, or bar-chart information superimposed on a map to show geographical abundance, can produce a stunning visual interpretation, but these should be used sparingly. While it is true that a (good) picture can say a thousand words, the tokenistic use of photographs, diagrams, or graphs can simply clutter up the main text, and require further text to explain the image to the reader. A good illustration actually says something clearly and makes a positive contribution to help the reader understand the accompanying text and data.


Storing and archiving data


When I was doing my own PhD, I had a filing cabinet with three or four drawers, and even then I had hundreds of photocopies of academic papers stacked in small piles according to theme and relevance to the section that I was writing about next. My raw research data, however, was compactly contained in electronic format in the form of tables and graphs; row after row of numbers on spreadsheets which could be tabulated and correlated in any format that I desired. When I left the department, the files were archived for a few years, and then I suspect they were all dumped when the department moved to another building on another campus.

Now, when I generate research data, it is almost entirely in electronic format, and it is automatically stored in several places. I have my personal space in the memory banks of the university computing system, and this space is automatically backed up overnight. I also usually back up to my own cloud space, so that I can access the data wherever and whenever I want. Usually, I also store data for individual projects on a separate memory stick or portable hard drive. The digital age means that after two or three clicks, I can be assured that copies of my data are safely held in four or five independent locations. Research students can simultaneously share data with a colleague or supervisor in a different part of the world without even leaving their own desk.
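The multi-location habit described above can even be scripted. Here is a minimal sketch of copying a dataset file to several backup locations in one step; the paths are invented for illustration and would need replacing with your own university space, cloud-synced folder, and portable drive:

```python
import shutil
from pathlib import Path

# Hypothetical backup locations -- adjust to your own setup.
backup_targets = [
    Path("/mnt/uni_home/phd_data"),       # university space (backed up overnight)
    Path("/home/me/CloudDrive/phd_data"),  # cloud-synced folder
    Path("/media/usb_stick/phd_data"),     # portable drive
]

def back_up(dataset: Path) -> list[Path]:
    """Copy one dataset file to every backup target that currently exists,
    preserving timestamps, and return the paths of the copies made."""
    copies = []
    for target in backup_targets:
        if target.is_dir():  # skip drives that are not plugged in / mounted
            copies.append(Path(shutil.copy2(dataset, target)))
    return copies
```

After the two or three clicks (or one script run), copies of the data sit in several independent locations, as the text describes.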

This is only the tip of the iceberg, however, because the production of digital data raises almost as many questions as it provides innovative opportunities. There needs to be an early discussion in the supervisory team, for instance, about not simply which data will be stored, but where it will be stored, for how long, and who will have access to it. This is not simply an issue of security, although security, confidentiality, and appropriate use of the data will certainly figure in the discussion. There is a growing awareness that when public money is used to fund research, there needs to be a transparent return to the public interest. Initially this has meant that research results, reports, and journal articles should be made freely available to the public. This is being extended in the next Research Excellence Framework in the UK to insist that if a journal article is not already published as an open-access resource, it needs to be added as an open resource to the digital repository of the relevant institution. But there’s more.

The argument has been extended to include the research data generated by public funding, so the datasets themselves are tending to become open, shared property. Whether the data consists of numbers, interviews, audio recordings, photographs, or other recordable results, the likelihood is that the data being gathered by a researcher today is probably going to be a shared resource tomorrow. It will be possible for other researchers, in subsequent years, to access your raw data, perhaps combine it with other raw data, and re-analyse, re-interpret, and publish their conclusions. It now matters a great deal more seriously exactly who can gain access to your research data, and for what purposes. As the law currently stands, a bona fide researcher can have access to open datasets for up to ten years after they have been deposited. But here is the catch – if a researcher accesses the data after nine years, the open-access clock is automatically re-set for a further ten years. This means that data which is being collected and digitally stored just now might still be openly available long after the initial researcher has moved on from that research topic, perhaps changed institutions, changed careers, maybe even passed away. The raw data of open-access digital resources is now guaranteed a lifetime longer than the career-span of many individual researchers. So think carefully about what you gather, how you organise and store it, and what your legacy of research data will be!

What methods will help to answer the research question?


This is where it gets hard, not simply because the research student is venturing out into the unknown, but also because selecting the methods through which the research will be conducted will differ hugely between cultures, between disciplines, and between subjects within disciplines. There is no one-size-fits-all template which will allow a pick-and-choose approach to selecting the most appropriate methods. In one sense, this is an easy step, because it will probably be pretty obvious from the outset what methods will be needed in order to answer the research question(s). Almost all academic research methods will involve reading, either to follow up on what has already been said about the topic or to put it into a wider context. After that, the methods might include interviews, experiments, observations, questionnaires, focus groups, and a host of other activities which will change in emphasis from discipline to discipline. Getting the “correct” mixture of these methods is what will determine the methodology, that is, the system of methods for further research.

Here is where high technology can come in. I say “high” technology because even using pen-and-paper or driving a car to conduct an interview is using technology, but of course we generally mean computer-based technology. In educational circles you will frequently hear the assertion that “the technology should never lead!”. This is certainly true, to an extent, but not entirely. For instance, if there are two (or more) ways to record research data, and one way entails using a high-technology solution which makes it easier, more flexible, and/or more secure, then surely most sensible people would vote for the use of the technology. Examples might include the use of RefME to compile the dissertation reference list and store it on the cloud; using Mendeley to store the articles online; the use of SurveyMonkey to conduct a questionnaire online rather than face-to-face, giving time-flexibility, wider geographic coverage, and the ability to utilise automatic data analysis and presentation tools; the use of a free voice-recorder smartphone app to record interviews… The list could go on and on.

A crucial factor in all of this is to consider carefully – right at the start – how these methods will allow you to analyse and hopefully make sense of the data which will be gathered. It makes little sense to jump off a high point without knowing, even approximately, where you might land. Similarly, it makes little sense to gather mountains of data without any idea of how to begin to make sense of it. The supervisor should be able to give some clear directions, but ultimately each situation, each carefully worded question, is slightly different, and will have different constraints on time, resources, and abilities, so the student will need to be fully comfortable with the methodology before even starting the research. Prior studies in a similar area can help to provide some direction, but the precise mixture needs to be decided for each individual research project.



One of the really good things about being a writer is that there is a written record of your ideas. I was watching a video clip this morning of the Professorial lecture by Linda Creanor at GCU, and I was struck, in her short review of “Learning and Technology – evolution or revolution”, by how far we have come. In a book that I co-wrote about ten years ago called “The Connecticon: learning for the connected generation”, we explored the enhanced ability of digital networks to connect people and share ideas. We called this “hyper-interactivity”, and though “social networks” were not really on the radar to the same scale as today – networks certainly were. It’s the inter-activity that stimulates learning. I think there is a fundamental difference between talking about a “networked generation” versus “digital natives”. We are all able to join the “networked generation” (even if we are ‘the older generation’ 🙂 ) but the idea that people are dropping out of the womb with an in-built ability to use digital networks effectively just does not stack up. I liked Linda’s mention of the use of “animateurs” to facilitate connectivity in digital networks (I was a big advocate of animateurs in the early 1990s and trained many), and I also liked her comments on the changing perceptions of MOOCs and how they might interact with the institution. One of the criticisms sometimes levelled against digital resources is that they have a short shelf-life and the link is liable to vanish – but you know what, the same is true of “traditional” printed media! Publishers normally run very limited print-runs of books these days, and as a result even some very good books go out of print very quickly. Unlike old library books, which seem to be pulped or sent to second-hand stores, the digital artefact has the ability to be permanently archived and permanently accessible, even if just for historical comparison. Check out Linda’s lecture here

Openness in networks


I have had several fascinating conversations this week between people who would like to see online social networks used more fully in education and those who emphatically would not. My own view is that such networks can and should be used where appropriate to the learning tasks, but that students (and staff) will need training in order to understand what they are doing and to use the applications responsibly. I have heard some “fundamentalist” arguments that external applications such as Facebook, YouTube, Twitter, etc. should never be used for education (despite the well-documented advantages for student support and engagement). I have also heard a totally gung-ho “bring-it-on” attitude that rushes to adopt every new technology. In my view, both are wrong! The benefits of the new wave of web 2.0 applications, apart from the networking potential, are the peer-to-peer co-creation of knowledge, and also the unpredictability of the network connections. Not for nothing has it been called “disruptive education”. While I do firmly believe that the ‘established’ educational system does need to be disrupted and encouraged to embrace the online innovation that is hitting every other sector of society, I am not in favour of incautiously experimenting on students. The way that we teach people to deal with new technology, such as online social networks, is not to ban it or smother it in regulations, but to work with students so that they learn the benefits and disadvantages, and through this we all understand what constitutes “appropriate behaviour”. I was reminded of a wonderful educational quote that “We learn about democracy by acting democratically”, and following on from this I adapted this great image from those social media known as ‘Wikipedia’ and the ‘Creative Commons’ to reflect on another misguided attempt to dictate how people should think and with whom they should network.

Visual presence


A day of floating around for me. I had forgotten that today is a local holiday, and I had agreed to link with some colleagues for a videoconference to discuss some new degree provision. So I downloaded Jabber onto my Mac at home and joined the meeting that way. A wonderful tool that lets me be in two places at once… but so much of the effectiveness of this medium is due to the culture of use, as much as the affordances of the technology. We occasionally point out that the UHI accounts for around 52% of all the videoconferencing in Higher Education in the UK, and that we are the biggest single users of educational videoconferencing in Europe – but what real analysis of use do we make? There are good ways to use VC and bad ways, and either way, the etiquette of the medium is totally different from a face-to-face situation. The chairman needs to be more effective and inclusive, and participants need to be more succinct and more aware of other users than in f2f. The visual image is activated by the speaker, so if everyone tries to speak at once, or talk over another speaker, the effect is chaos. If the meeting drags on badly, the time slot will disappear and will cut off even the most important speaker in mid-flow. Yet the ability to conduct important business in your own time-space, without needing to spend hours travelling to and from a distant location, is excellent. The meeting can be recorded for the record (or archive), and the immediacy of the medium can be conveyed so much more powerfully than by phone, or email, or even, dare I say, than face-to-face (because we are forced to actually listen and look at the speaker). So why do we not insist on adequate training for users of videoconferencing? I think regular UHI users are probably on average better than colleagues at other institutions who use VC infrequently, but why do we assume that participants can ‘pick up’ the techniques of use – the etiquette, the body language, the technical skills – without some training?
We are committing a grave mistake if we think that what we do f2f can simply be translated directly across to VC, but I am convinced that with appropriate training and practice, the results, and the benefits, can be far greater than we currently recognise.



I had several interesting conversations today, all revolving around networks and learning. One of my PhD students made an excellent submission on e-learning, but (I think) pulled her punches on a critique of Connectivism because she assumed that I am an advocate of this. Despite being a supervisor for George Siemens’ PhD, I am rather agnostic on Connectivism as an educational phenomenon. I know it may seem heretical to some of my colleagues, but I think that Connectivism, though very plausible, just lacks that final… well… connection! In a book several years ago, Robin Mason and I tried to capture the workings of networked, connected learning in a book called “The Connecticon”… it was perhaps rather presumptuous for its time, but we thought (and I still do) that the process of online learning can be broken down into three basic levels… 1) the digital, computer-hosted resources; 2) the network and infrastructure of the internet that can link these resources at great speed (we called this hyper-interactivity); and 3) the humans at each end of the network connections, who absorb, process, and act upon these transmitted resources. They (the humans) will act differently according to their abilities, experience, and cognitive capacities. This (to a large extent) is the basis of situated learning and of social constructivism. Responses in less than 100 words on the blog reply please! 🙂