This is the second in a series of blog posts associated with my keynote at the Italian e-learning society conference. Having set the scene in the talk by taking an historical perspective on e-learning policy and perspectives, I then moved on to consider current trends and future directions. I highlighted four recent reports which provide an indication of where technological developments are going, namely:
The Horizon report series, which provides an annual snapshot of technologies likely to have a significant impact in one, three and five years. For 2009 the following six are listed as technologies to watch:
- Now and in the next year: Mobile and cloud computing
- Over the next three years: Geo-everything and the personal web
- In five years' time: Semantic-aware applications and smart objects
The NSF cyberlearning report also considered current technological developments, but focused on the implications for education and provided a series of recommendations:
- Help build a vibrant cyberlearning field by promoting cross-disciplinary communities of cyberlearning researchers and practitioners including technologists, educators, domain scientists, and social scientists.
- Instill a “platform perspective”—shared, interoperable designs of hardware, software, and services—into NSF’s cyberlearning activities.
- Emphasize the transformative power of information and communications technology for learning, from K to grey.
- Adopt programs and policies to promote open educational resources.
- Take responsibility for sustaining NSF-sponsored cyberlearning innovation.
The IPTS report provides a database of over 200 case studies of the use of web 2.0 technologies in education.
The edited book “The collective advancement of education through open technology, open content and open knowledge” provides a summary of the spirit of the increasingly prevalent “open movement”, including of course the open educational resource movement.
I then argued that there is (and indeed always has been) a co-evolution of tools and users: from the first very rudimentary communications between humans, through the development of different forms of symbolic representation (alphabet systems, numerical representations, graphics and symbols), and finally on to the various forms of technological mediation over the last hundred years or so. I quoted Pea and Wallis from the cyberlearning report:
We can now interact at a distance, accessing complex & useful resources in ways unimaginable in earlier eras.
And posed the question, what next?
I then focused more specifically on the actual affordances of new technologies and argued that they match well with current thinking on what constitutes good learning.
So the various patterns of behaviour evident in web 2.0 practices map well to the general shift from a focus on the individual to the social aspects of learning. Location-aware technologies clearly have potential in terms of contextualised and situated learning, while adaptation and customisation map well to notions associated with personalised learning.
The immersive, 3D, real-time environments in tools such as Second Life offer opportunities to set up authentic and experiential learning. The automatic habit of "Google it!" as a mechanism for finding information could, with appropriate learning activities, be channelled to enable learners to adopt more inquiry-based approaches; indeed I would argue that this is important, as otherwise learners will not be able to make informed, critical choices about the information they are presented with.
Different patterns of behaviour are emerging from observation of gaming environments, and in particular community-based systems such as World of Warcraft. The notion of peer-credited expertise and levels of attainment could clearly be applied to fostering peer-learning approaches. User-generated content and open educational resources are in many ways synonymous; however, to fully exploit their potential we need better ways of helping users to deconstruct and repurpose these resources for their own contexts. Peer support and critiquing are also evident within the blogosphere, which likewise offers a lot in terms of self-reflection.
Finally, the enormous potential of cloud computing means perhaps we are on the brink of moving towards a dynamic, shared collective intelligence, which maps to notions of distributed cognition.
Despite all these possibilities, I argued that the rhetoric doesn't match the reality. There are a range of complex reasons for this and a set of fundamental tensions: between integrated IT systems vs. loosely coupled tools, student-controlled vs. institutionally controlled tools, and personalised vs. institutional tools. I argued that there is no simple answer at the moment; it is not a question of either/or, but it is important that we are aware of these tensions and adjust our institutional policies appropriately. I pointed the audience to the recent resurgence of the "VLE/LMS is dead" debate; I suspect variants on this will continue for some time to come!