Australia’s changing how it regulates the internet — and no-one’s paying attention


When we scroll online, most of us give little thought to what goes on behind the scenes – who decides what content we can or cannot see.

Often that decision is in the hands of companies: Facebook, TikTok, and most major social media platforms have rules about what material they accept, but enforcement can be patchy and far from transparent.

In recent years, the federal government has also passed a series of often-controversial laws that give it more control over what’s online.


For example, there is the new Online Safety Act, which was passed at lightning speed in the middle of last year.

Among other things, the law requires the tech industry — which includes not only social media but also messaging services like SMS, internet service providers, and even the company behind your modem — to come up with new codes that regulate "harmful online content."

Produced by industry bodies, these codes will say a lot about how our technology is governed, but some fear they could have unintended consequences, not least because they lean on an outdated classification scheme.


What are the codes?

Following the enactment of the Online Safety Act, the eSafety Commissioner directed the industry to develop draft codes to regulate "harmful online content".

As defined by the eSafety Commissioner, this “harmful” material is referred to as “Class 1” or “Class 2”.


These categories are borrowed from the National Classification Scheme, better known for the ratings you see on movies and computer games. More on that in a moment.

In general, you can think of Class 1 as material that would be refused classification, while Class 2 could be classified as X18+ or R18+.

Ultimately, the industry has developed draft codes that describe how they will implement safeguards against access or distribution of this material.

eSafety Commissioner Julie Inman Grant is overseeing the new Online Safety Act. (ABC News: Adam Kennedy)

The codes vary depending on the industry and the size of the company. For example, a code might require a company to report objectionable social media content to law enforcement, have systems in place to take action against users who violate policies, and use technology to automatically detect known child sexual exploitation material.

What content is affected?

At the moment, the draft codes only deal with Class 1A and 1B material.

According to eSafety, Class 1A may include child sexual exploitation material, as well as content that advocates terrorism or depicts extreme crime or violence.

Class 1B, on the other hand, might contain material showing "matters of crime, cruelty or violence without justification" and drug-related content, including detailed instructions on prohibited drug use. (Classes 1C and 2 deal primarily with online pornography.)

The problem, critics argue, is that Australia’s approach to classification is confusing and inconsistent with community attitudes. The National Classification Scheme was enacted in 1995.

“The classification scheme has long been criticized for capturing a whole range of materials that are perfectly legal to create, access and distribute,” said Nicolas Suzor, an internet governance researcher at Queensland University of Technology.

And rating a film for cinemas is one thing. Categorizing content online at scale is something else entirely.


