The Australian government has released proposals that would increase the obligations of online companies to protect children and other internet users from harmful content.
Australia hopes proposed regulatory changes will keep children safer online.
The proposed changes come after criticism that the government was not doing enough to keep children safe online, Communications Minister Michelle Rowland said.
The proposed changes would extend the scope of regulation to cover algorithms and generative artificial intelligence (AI), and would require that the best interests of children be at the forefront when any service is designed, Ms Rowland said.
“We know that children are vulnerable to harmful content online and it is important that their best interests are prioritised throughout the design and implementation of services,” Ms Rowland said.
The consultation on the proposed changes, which apply to online service providers including social media services, closes in February 2024. In August, Australia’s online safety watchdog, eSafety, called for regulatory oversight of the communications technology industry.
eSafety said it has received complaints about children using computers to create pornographic images of their peers in order to bully them. The agency said service providers must take “reasonable steps” to proactively minimize the extent to which AI capabilities can be used to generate illegal or harmful material or enable illegal or harmful activity, including AI-generated deepfake videos.
eSafety has warned that stricter age verification is needed on websites used by children, to prevent them from being coerced into producing sexually abusive material. An analysis of more than 1,300 child sexual abuse reports received by eSafety found that in 1 in 8 cases the children “created” the content themselves because “predators” forced them to film and photograph themselves in sexually explicit acts. In practice, children can easily bypass age verification on such websites.
On November 26, the US, the UK and more than 10 other countries announced an international agreement on AI safety. According to US officials, it is the first detailed agreement on how to ensure AI technology is used safely, and it urges technology companies to build AI systems that are secure from the design stage onward. Under the document, the 18 signatory countries agreed that companies designing and using AI need to develop and deploy the technology in ways that keep users safe from misuse. The agreement is not binding and mainly offers general recommendations, such as monitoring AI systems for abuse and protecting data.