In brief Facebook and Instagram’s parent company Meta has been hit with not one, not two, but eight different lawsuits accusing its social media algorithms of causing real harm to young users across the United States.
Complaints filed last week claim that Meta’s social media platforms have been designed to be dangerously addictive, causing children and teens to view content that increases the risk of eating disorders, suicide, depression and sleep disorders.
“Social media use among young people should be seen as a major contributor to the mental health crisis we face in the country,” said Andy Birchfield, an attorney at the Beasley Allen law firm, which is leading the cases, in a statement.
“These apps could have been designed to minimize any potential harm, but instead a decision was made to aggressively addict adolescents in the name of corporate profits. It’s time for this company to recognize the growing concerns about the impact of social media on the mental health and well-being of this most vulnerable part of our society and to alter the algorithms and business goals that have caused so much damage.”
The lawsuits were filed in federal courts in Texas, Tennessee, Colorado, Delaware, Florida, Georgia, Illinois and Missouri, according to Bloomberg.
How safe are self-driving vehicles really?
The safety of self-driving car software like Tesla’s Autopilot is difficult to assess, given that there is little publicly available data and the metrics used for such assessments are misleading.
Companies developing self-driving vehicles typically report the number of miles the self-driving technology has traveled before human drivers have to take over to avoid mistakes or accidents. Data, for example, shows that fewer crashes occur when Tesla’s Autopilot mode is on. But that doesn’t necessarily mean it’s safer, experts say.
Autopilot is more likely to be engaged on highways, where conditions are less complex for the software to handle than navigating a busy city. Tesla and other automakers do not share data broken down by road type, which would allow a fairer comparison.
“We know that cars using Autopilot crash less often than when Autopilot is not in use,” Virginia Transportation Research Council researcher Noah Goodall told The New York Times. “But are they driven the same way, on the same roads, at the same time of day, by the same drivers?”
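To see why the headline figures can flatter driver-assistance software, here is a back-of-the-envelope sketch in Python. All of the crash rates and mileage splits below are invented for illustration; they are not Tesla, NHTSA, or VTRC data.

```python
# Hypothetical, illustrative numbers only -- not real crash data.
# Shows how a raw "crashes per mile" comparison can favor a driver-assist
# system that is mostly engaged on easy highway miles.

def aggregate_rate(mileage_mix, crash_rates):
    """Crashes per million miles, weighted by the share of miles on each road type."""
    return sum(mileage_mix[road] * crash_rates[road] for road in mileage_mix)

# Assumed per-road crash rates (crashes per million miles).
# Note: the assisted system is slightly WORSE on both road types here.
rates_assisted = {"highway": 1.2, "city": 6.0}
rates_manual   = {"highway": 1.0, "city": 5.0}

# Assumed mileage mix: assistance is mostly engaged on highways.
mix_assisted = {"highway": 0.9, "city": 0.1}
mix_manual   = {"highway": 0.3, "city": 0.7}

print(f"assisted: {aggregate_rate(mix_assisted, rates_assisted):.2f} crashes/M miles")
print(f"manual:   {aggregate_rate(mix_manual, rates_manual):.2f} crashes/M miles")
# assisted: 1.68, manual: 3.80 -- the aggregate numbers make the assisted
# system look safer even though it is worse on every individual road type.
```

The point of the toy numbers is that the mileage mix, not the software, can drive the headline rate, which is exactly the confounding Goodall is describing.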
Last year, the National Highway Traffic Safety Administration ordered companies to report serious crashes involving self-driving cars within 24 hours of the incident. But none of that information has yet been made public.
AI upstart accused of sneakily using human labor behind autonomous tech
Nate, a startup valued at more than $300 million that claims to use AI to automatically fill in shoppers’ payment information on retail websites, actually pays workers $1 to enter the data manually.
Buying things on the internet can be tedious: you have to enter your name, address, and credit card details if a website hasn’t saved the information. Nate was designed to save people from having to do this every time they visit an online store. Billed as an artificial intelligence application, Nate claimed to use automated methods to fill in personal data after a consumer places an order.
But the software has proven difficult to build, given the different combinations of buttons its algorithms have to press and the precautions websites put in place to stop bots and scalpers. To try to attract more consumers to the app, Nate offered people $50 to spend online at stores like Best Buy and Walmart. But the upstart struggled to get its tech to complete those orders properly.
The workaround? Fake it. Nate instead hired workers in the Philippines to enter consumers’ private information manually; orders were sometimes completed hours after they were placed, according to The Information, which claimed that somewhere between 60 and 100 percent of orders were processed by hand. A spokesperson for the upstart said the report was “incorrect and claims questioning our proprietary technology are completely baseless.”
DARPA wants AI to be more reliable
DARPA, the US military’s research arm, has launched a new program to fund the development of hybrid neuro-symbolic AI algorithms, in the hope that the technology will lead to more reliable systems.
Modern deep learning is often described as a “black box”: its inner workings are opaque, and experts often don’t understand how a neural network arrives at a particular output for a given input. That lack of transparency makes the results difficult to interpret, which in turn makes deployment risky in certain scenarios. Some believe that incorporating more traditional symbolic reasoning techniques could make models more reliable.
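As a rough sketch of what such a hybrid might look like (the class, rules, and scores below are invented for illustration and are not part of DARPA’s program), a learned model can propose a probabilistic label while hand-written symbolic rules, which are easy to audit, constrain the final decision:

```python
# Toy neuro-symbolic sketch: a neural net proposes a label, explicit rules
# veto or adjust decisions that violate known constraints. Hypothetical only.
from dataclasses import dataclass

@dataclass
class Track:
    neural_scores: dict         # learned model's label probabilities
    friendly_transponder: bool  # symbolic fact from a trusted sensor
    in_restricted_zone: bool    # symbolic fact from a map/airspace rule

def classify(track: Track) -> str:
    # Step 1: neural proposal -- take the highest-scoring label.
    proposal = max(track.neural_scores, key=track.neural_scores.get)

    # Step 2: symbolic rules that are explicit, auditable, and unchanged by retraining.
    if track.friendly_transponder:
        return "friendly"   # hard rule overrides a noisy neural guess
    if track.in_restricted_zone and proposal == "friendly":
        return "neutral"    # don't accept "friendly" in restricted space without a transponder
    return proposal

print(classify(Track({"friendly": 0.55, "neutral": 0.30, "adversarial": 0.15},
                     friendly_transponder=False, in_restricted_zone=True)))
# -> "neutral": the final decision traces back to an explicit rule,
#    not just to opaque network weights.
```

Real neuro-symbolic systems are far more sophisticated, but the division of labor sketched here, with statistical perception feeding explicit, explainable rules, is the general idea behind the kind of hybrid DARPA wants to fund.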
“Motivating new thinking and approaches in this space will help ensure that autonomous systems will work safely and perform as intended,” said Sandeep Neema, program manager for DARPA’s new Assured Neuro Symbolic Learning and Reasoning program. “It will be integral to building trust, which is key to the successful adoption of autonomy by the Department of Defense.”
The initiative will fund research into hybrid architectures that mix symbolic systems with modern AI. DARPA is particularly interested in applications relevant to the military, such as models that can detect whether entities are friendly, adversarial, or neutral, as well as flag dangerous or safe areas in combat. ®