Earlier this month I spent a week at the National Data Archive on Child Abuse and Neglect (NDACAN) investigating the correlation between guardian behaviors (rules, filters, surveillance, etc.) and youth online safety.
I’ve recently become interested in how technology use among youths is regulated within the family. Parents tend to learn parenting mostly from their own parents, but that resource is of limited use with new technologies, which raises the question of how, and from whom, guardians learn to deal with youths and digital media.
I quickly discovered that there are many studies of youths’ online experiences, but very few studies of how guardians regulate their youths’ use of digital media, much less studies correlating those guardian behaviors with online safety. So I took the opportunity to use the NDACAN’s data to look more closely at this relationship.
“Online safety,” in this context, refers to two classes of events: (1) exposure to illicit or threatening content against the youth’s will, such as being cyberbullied, exposed to pornography, or sexually solicited, and (2) unsafe behaviors initiated by the youth, such as revealing too much personal information or participating in cyberbullying or sexual activity.
Two big caveats: (1) The data is ten years old, which is equivalent to the paleolithic era in Internet time. (2) There’s no time-order data, so there’s no way to tell which came first, the guardian strategy or the unsafe event. I’ll address both limitations in a new study I’m planning for the fall, which will update the instrument and should allow an interesting comparative analysis of how guardian and youth behaviors have changed over the past decade.
Despite these limitations, the results were interesting, and since so many public discussions of Internet safety rely on anecdotes, media sensationalism, and biased “studies” conducted by advocacy groups, I thought I’d share my main findings:
(1) The effect of guardian strategies is small but significant. Collectively, guardian strategies accounted for about as much variance as demographic controls such as age and sex, but explained only one-fourth as much variance as variables assessing access in the home, frequency of use (days per week), intensity of use (hours per day), and diversity of use (do they use e-mail, do they use IM, etc.).
(2) A good relationship is important. When the guardian and youth both agreed that they got along “very well,” the youth was less likely to have had an unsafe incident.
(3) Learning about the Internet is important. Youths whose guardians knew as much about the Internet as they did, or more, were less likely to have an unsafe event than youths who knew more about the Internet than their guardians.
(4) Talking about the risks is important. Among frequent youth users, the more their guardians had talked with them about online dangers, the less likely they were to have had an unsafe event.
(5) Having rules is important. “Rules,” in this context, refers to an understanding between the guardian and youth about what the youth can or cannot do, like time limits, curfews, or designating certain sites or types of interactions as off limits. (It does not necessarily mean that the youth’s activities are monitored or the rule is enforced.) Youths who had more of these understandings were less likely to have had an unsafe event.
(6) The jury is still out on filters and surveillance. Blocks and filters on home computers were not significant predictors of whether youths experienced an unsafe event. Surveillance (which ranges from the guardian occasionally peeking over the youth’s shoulder to checking browser histories or using keyloggers), meanwhile, was positively correlated with unsafe events.
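As a concrete illustration of the kind of comparison behind finding (1), here is a toy incremental-R² (hierarchical regression) sketch. Everything in it is my own assumption for illustration: the data are synthetic, and the block names, sizes, and coefficients are invented, not the NDACAN data or the model I actually fit.

```python
# Toy sketch: compare variance explained by blocks of predictors
# by dropping each block and measuring the fall in R^2.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical predictor blocks (names and sizes are assumptions):
demographics = rng.normal(size=(n, 2))   # e.g., age, sex
usage = rng.normal(size=(n, 4))          # access, frequency, intensity, diversity
strategies = rng.normal(size=(n, 3))     # rules, talks, filters

# Synthetic outcome, built so that usage-type variables dominate.
y = (usage @ [1.0, 0.8, 0.6, 0.4]
     + demographics @ [0.5, 0.5]
     + strategies @ [0.3, 0.3, 0.3]
     + rng.normal(size=n))

def r_squared(*blocks):
    """R^2 of an OLS fit of y on the given predictor blocks (with intercept)."""
    X = np.column_stack([np.ones(n), *blocks])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

full = r_squared(demographics, usage, strategies)
# Incremental R^2: variance uniquely attributable to each block.
delta_strategies = full - r_squared(demographics, usage)
delta_usage = full - r_squared(demographics, strategies)
print(round(delta_strategies, 3), round(delta_usage, 3))
```

With coefficients like these, the usage block's incremental R² dwarfs the strategies block's, which is the shape of the result described above, though the actual magnitudes came from the survey data, not from a simulation.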
As I stressed earlier, it’s impossible to know whether guardians implement surveillance in response to a youth’s unsafe experience, or whether they implement it first, only for the youth to find a way around it and subsequently experience an unsafe event. So I definitely wouldn’t say the evidence shows that surveillance doesn’t work, but I also didn’t find any evidence that it does.
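To see why that caveat matters, here is a tiny simulation, again entirely synthetic and not based on the survey data, in which surveillance has no causal effect at all, yet a cross-sectional snapshot still shows a positive correlation, simply because guardians tend to adopt surveillance after an unsafe event.

```python
# Toy simulation: reverse causation alone can produce a positive
# surveillance-unsafe correlation in cross-sectional data.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Unsafe events occur at a base rate, independent of surveillance.
unsafe = rng.random(n) < 0.2

# Guardians are more likely to be surveilling *after* an unsafe event
# (probabilities here are arbitrary assumptions for the illustration).
p_surveil = np.where(unsafe, 0.6, 0.2)
surveillance = rng.random(n) < p_surveil

# A single snapshot sees a positive correlation anyway.
corr = np.corrcoef(surveillance, unsafe)[0, 1]
print(round(corr, 2))
```

The correlation comes out clearly positive even though, by construction, surveilling a youth changes nothing, which is exactly why cross-sectional data can’t settle whether surveillance works.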
I was surprised that the autocratic methods, such as filters and surveillance, were less effective than the more collaborative methods, such as conversations and rules. In a content analysis of newspaper coverage of sexting that I’m preparing for the ASA annual meeting in August, I found that the autocratic methods were commonly and aggressively advanced by law enforcement officials, editorialists, and even parenting experts. Guardians were systematically portrayed as admirable, autocratic tough-lovers or, conversely, negligent enablers.
Given the popularity of autocratic methods in the media, and the potential damage they risk to the guardian-youth relationship, it will be interesting to see in future studies whether they are actually more effective than talking and setting some guidelines.