I believe that Prensky (2001) offers the most concise definition of a “digital native” when he describes those born between 1982 and 1991 (Oblinger 2005) as “native speakers of the digital language of computers, video games and the Internet,” unaware of a reality without these common technological conveniences. In my opinion, the concept of the digital native has been derived neither from empirical research nor from popular thought, but from well-analyzed surveys.
As a scientist, I understand empirical research to mean something that can be “tested.” In this case, because the digital native is a social phenomenon, I do not believe it can be tested easily in an empirical and unbiased manner. Bullen, Morgan, Belfer, and Qayyum (2009) outline how many of the so-called research studies that claim to either support or negate the concept of the digital native are flawed in their scientific method and design. Their criticisms even name some of the exact readings assigned for this course as poor examples of research that go on to fuel popular claims or ideals (Bullen et al. 2009).
Having said that, I believe the concept of the digital native is more than popular thought; it is an analysis of differences in the human experience over time. In his analysis, “The digital native: myth and reality,” Selwyn (2009) surveys the current published body of work on the concept of the digital native and, in those summations, highlights how it can be both a “myth and reality.” Selwyn reviews literature from dozens of professional sources, including Prensky (2009). There is some credence for both sides of the argument, yet concrete data is limited.
The question then becomes: do I believe there is such a thing as a digital native? Based upon my own experience, I do. Each new generation is itself a new native. As technology advances, so do our methods for incorporating technology into our lives and into the ways we learn. And while I believe that the concept of the digital native has relevance, I have concerns about how a digital native is defined and labeled. One major flaw in Prensky’s assertion is the implication that only those born after the advent of certain technologies can truly master them, while anyone born earlier is plagued by a lifelong learning “accent” (Prensky 2001). In this respect, Prensky seems to disregard the motivation of the individual. I believe that anyone can learn if given the proper instruction and if they possess motivation and interest. By assuming that only a child has the innate ability to master technology if it has been introduced from infancy (Selwyn 2009), the motivation and mastery of adult learners is dismissed.
Both Selwyn (2009) and Oblinger (2005) discuss the common belief that these digital natives, because of their proficient use of the myriad technologies available, are somehow more advanced learners because they are able to multi-task. But is multi-tasking inherently better? Does it make an individual “smarter”? Are those students in the Kansas State University video any better able to learn and succeed because they have access to all that technology? In some cases, yes. Advanced technology, such as computer simulations, makes it possible to incorporate a variety of teaching strategies that render our lessons relevant and meaningful. But I can’t help thinking of the opposite side of the coin. Can the many technologies these students use (computers, TV, cell phones, IM, etc.) be more of a hindrance by providing too much distraction? In many cases, yes.