When we scroll online, most of us give little thought to what goes on behind the scenes – who decides what content we can or cannot see.
Often that decision is in the hands of companies: Facebook, TikTok, and most major social media platforms have rules about what material they accept, but enforcement can be patchy and far from transparent.
In recent years, the federal government has also passed a series of often-controversial laws that give it more control over what’s online.
For example, there is the Online Safety Act, which was passed at lightning speed in the middle of last year.
Among other things, the law requires the tech industry – which includes not only social media but also messaging services like SMS, internet service providers, and even the company behind your modem – to come up with new codes that regulate “harmful online content”.
Produced by industry bodies, these codes will say a lot about how our technology is governed, but some fear they could have unintended consequences, not least because they lean on an outdated classification scheme.
What are the codes?
Following the enactment of the Online Safety Act, the eSafety Commissioner directed the industry to develop draft codes to regulate “harmful online content”.
As defined by the eSafety Commissioner, this “harmful” material is referred to as “Class 1” or “Class 2”.
These categories are borrowed from the National Classification Scheme, better known for the ratings you see on films and computer games. More on that in a moment.
In general, you can think of Class 1 as material that would be refused classification, while Class 2 is material that could be classified X18+ or R18+.
Ultimately, the industry has developed draft codes that describe how companies will implement safeguards against access to, or distribution of, this material.
They vary depending on the industry and the size of the company. For example, the codes might require a company to report objectionable social media content to law enforcement, have systems in place to take action against users who violate policies, and use technology to automatically detect known child sexual exploitation material.
What content is affected?
At the moment, the draft codes only deal with Class 1A and 1B material.
According to eSafety, Class 1A may include child sexual exploitation material, as well as content that advocates terrorism or depicts extreme crime or violence.
Class 1B, on the other hand, might include material showing “matters of crime, cruelty or violence without justification” and drug-related content, including detailed instructions on prohibited drug use. (Classes 1C and 2 deal primarily with online pornography.)
The problem, critics argue, is that Australia’s approach to classification is confusing and inconsistent with community attitudes. The National Classification Scheme was enacted in 1995.
“The classification scheme has long been criticized for capturing a whole range of materials that are perfectly legal to create, access and distribute,” said Nicolas Suzor, an internet governance researcher at Queensland University of Technology.
And rating a film for cinemas is one thing. Categorizing content online at scale is something else entirely.
Consider potential Class 1B material – instructions relating to crime or information about prohibited drug use.
There are scenarios where, hypothetically, we would want such information to be available, Dr Suzor suggested – for example, making information about safe medical abortion available to people in certain US states.
“These are really difficult categories to apply to any kind of ‘internet scale’ because you’re clearly getting into all the gray areas,” he said.
Australia’s classification regulation was recently reviewed, with a report delivered in May 2020, but it’s still unclear how that review could affect the proposed industry codes designed to regulate “harmful online content”.
Do companies now have to monitor my messages?
The codes are intended to affect almost every industry that touches the internet, and there are concerns about how privacy could be compromised if applied to personal messages, files and other content.
Some major social media platforms are already using digital “fingerprinting” technology that attempts to proactively identify known child sexual exploitation or pro-terrorist material before it is uploaded.
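The fingerprinting approach works by comparing a “fingerprint” (a hash) of each upload against a database of fingerprints of already-known material. The sketch below is a simplified illustration only – the database contents and function name are invented, and real systems such as Microsoft’s PhotoDNA use perceptual hashes that still match after resizing or re-encoding, whereas the cryptographic hash used here only matches byte-identical files:

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited files.
# In production, these lists are maintained by bodies such as NCMEC
# and shared with platforms; the entry below is a made-up example.
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"example known prohibited file").hexdigest(),
}

def matches_known_material(file_bytes: bytes) -> bool:
    """Fingerprint an upload and check it against the known-material list."""
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    return fingerprint in KNOWN_FINGERPRINTS

# A platform would run a check like this before accepting an upload.
print(matches_known_material(b"example known prohibited file"))  # True
print(matches_known_material(b"an unrelated upload"))            # False
```

Because matching happens against fingerprints rather than the files themselves, platforms can screen for known material without storing or redistributing it – though, as discussed below, extending this kind of scanning to private messages raises its own problems.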
eSafety has expressed interest in the codes requiring a level of proactive monitoring – catching “harmful” content before it is posted.
However, in the draft codes, when it came to private file storage or communication, industry groups said expanding proactive detection could have serious privacy implications.
There are also concerns that the Codes could embed an approach to content moderation that’s really only available to the big players. Scanning tools are not necessarily cheap or readily available.
“Many of these proposed solutions will require big tech to stay big to meet these compliance requirements,” said Samantha Floreani, program director at Digital Rights Watch.
A spokesperson for eSafety said it would not expect industry codes to impose the same level of obligations on smaller companies as on larger ones.
Then there is the question of whether proactive detection systems are accurate, and whether there are avenues for appeal.
Gala Vanting, national program manager at Scarlet Alliance, said the use of this technology is of particular concern to those in the sex work industry.
“It is very likely that content will be over-captured. It’s very clumsy at reading context [around] sexual content,” she said.
Another complicating factor is that a review of privacy law is also underway, which could affect how these codes work – for example, by introducing requirements that restrict scanning.
A spokesperson for Attorney-General Mark Dreyfus said the department would deliver a final report later this year recommending reforms to Australia’s privacy law.
The draft industry codes are now open for public feedback, after which the eSafety Commissioner will assess whether they are up to scratch.
However, according to some reports, the consultation has been uneven, and many civil society groups consider the consultation window too short to be realistic.
There is also some frustration that the codes are being developed ahead of the privacy law review, among other possible online regulation changes on the table – a sequence that could produce a rather confusing regulatory regime for online content.
Then there is the debate about whether Australia is even taking the right approach to these issues.
The Online Safety Act itself has been controversial – particularly because of the discretion it places in the hands of the Communications Minister and the eSafety Commissioner.
“While there would be some self-evident materials that would not pass muster … there is tremendous power in the hands of one person who actually sets what the community’s expectations are,” said Greg Barns of the Australian Lawyers Alliance.
“The broader questions of what constitutes harm then begin to merge with issues of freedom of expression, but also transparency and accountability.”
Dr Suzor said he’s generally “completely on board” with the idea that governments want more say in the standards of acceptable content online.
But in practice, he said, there isn’t much clarity about what the codes are designed to achieve.
“The codes are agreements to basically do what the industry is already doing, at least the broader end of the industry,” he said.
“Actually, I don’t know what they’re supposed to achieve, to be honest.”