Montana Governor Greg Gianforte has signed a law banning TikTok, set to come into effect on January 1, 2024. The state aims to punish companies such as Apple and Google with a $10,000 fine for every day that TikTok remains available in their app stores. TikTok is widely expected to take the matter to court, but a spokesperson for the social media platform declined to say whether the company would file a lawsuit.
Montana has banned the video-sharing app TikTok from personal devices, with the law set to take effect on 1 January 2024. The app's owner, Chinese company ByteDance, has come under increasing scrutiny from governments worldwide over data privacy concerns. The US government already prohibits the use of the app on government devices, and Governor Greg Gianforte cited the risk of Chinese Communist Party surveillance as a reason for the Montana law. The law will not prevent Montanans who already have TikTok from using it, but could see Apple or Google face penalties of up to $10,000 if they allow the app to be downloaded from their stores.
The Canadian province of British Columbia runs a government programme to trace the owners of unclaimed sums left in dormant bank accounts, held by government departments, or sitting as unclaimed court payments. BC Unclaimed, formerly known as the BC Unclaimed Property Society, is responsible for reuniting abandoned or forgotten funds with their rightful owners. The society nonetheless faces scepticism, with some recipients suspecting its letters are scams. Despite any doubts, it may be worth searching the society's online database to see whether you are owed a share of the estimated $190m in unclaimed money in British Columbia.
U.S. lawmakers haven't yet regulated Big Tech. Artificial intelligence could be more challenging
CBC, 18 May 2023, 08:00
Computer scientists and policymakers face a regulatory puzzle in developing a legal definition of artificial intelligence (AI) and creating rules for its use. At a US Senate Judiciary Committee hearing last week, Sam Altman, head of AI firm OpenAI, advocated a series of regulations to minimise the negative impacts of increasingly powerful AI systems; his firm created ChatGPT, a chatbot that provides human-like responses. While some panellists and senators discussed AI as a broad concept, others addressed it more narrowly. The regulatory challenge lies in creating rules that protect against the potential harm posed by new technologies while encouraging innovation. One concern is that any new federal agency or regulatory programme is likely to attract constituencies that use government power to further their own interests; others warn that too much regulation could stifle the AI industry's growth.
Artificial intelligence (AI) is not a threat to humanity; rather, our own lack of humanity is the real threat, according to this week's City A.M. editorial. Although AI can provide good, relevant responses to queries, it can only reproduce humans at their most machine-like and cannot genuinely understand emotions. Rather than waiting for an almighty technological takeover, society can take control of AI just as it has shaped earlier technologies, through some form of government regulation. One way AI could benefit humans is by letting us tap more efficiently into the knowledge and wisdom of those who have gone before us. However, there is a significant downside: AI seeds confusion and potential societal discord by enabling our worst impulses and by promoting intolerance and misinformation.
Developed nations are struggling with a shortage of workers that is hitting every sector of the economy. The UK, the US and the EU are all grappling with the challenge, which has hit skilled jobs in the technology sector particularly hard. The skills gap stems from a lack of interest in maths and science among students at all levels, leaving too few people with the technical skills employers require. An in-depth report in Raconteur explores the issue, which government policies have failed to tackle, and notes that business leaders view workforce upskilling initiatives as ineffective.
While artificial intelligence (AI) raises concerns over its impact on society, economists writing in the Financial Times argue that policymakers should focus on how the productivity gains AI could bring are distributed. This is largely a question of intellectual property rights, which turns on who controls access to the technology. At one end of the spectrum lies a completely proprietary world, in which the most useful AI is the intellectual property of a monopolistic or oligopolistic company; at the other, an open-source world in which productivity gains accrue to those who deploy the technology. Governments could legislate to increase transparency and broaden access to such technologies.
The US is struggling with shortages of major drug treatments: some cancer patients cannot get chemotherapy, many ADHD patients cannot obtain their medications, and some parents have had trouble finding children's Tylenol as supplies have run low. Experts have called the ADHD shortage a "public health emergency." The shortages reflect problems in the generic drug supply chain, where companies under pressure to offer the lowest possible prices cut corners to reduce costs, endangering supply. The shortages have also hit some brand-name products, leaving even more people without the drugs they depend on. Demand for ADHD medication has surged in recent years as awareness of mental health issues has grown and as pandemic-era policy changes expanded the use of telemedicine.
Montana has become the first US state to enact a ban on TikTok, a move that may conflict with the right to freedom of speech enshrined in the First Amendment. From January 2024 it will be illegal for the short-form video app to operate in the state, although the legislation is unlikely to be enforceable in practice. Under the new law, TikTok and other companies risk a penalty of $10,000 per user each time the app is accessed or downloaded. However, geofencing technology that would let app stores block downloads in states with such legislation does not currently exist. While TikTok may be able to identify users' locations through their mobile phones, the company could still face fines from the state if it failed to lock out Montanans.
OpenAI has launched an app giving wider access to its sophisticated ChatGPT chatbot, which handles inquiries and responds with artificial intelligence-generated answers. Available from the Apple App Store, the mobile product mirrors the website but also incorporates OpenAI's speech recognition tool Whisper. The groundbreaking chatbot was used by more than 100 million people in January this year and has grown in popularity since its launch in November, prompting concerns from AI ethicists over the technology's potential for misuse. The app is set for release on Android devices in the coming weeks, OpenAI said.
The US Supreme Court has declined to overhaul internet publishers' legal protections in two cases that could have brought significant changes to the laws governing online platforms such as Google and Twitter. The cases centred on Section 230 of the Communications Decency Act, which protects online platforms from legal liability for content posted by users. They were brought by families of people killed in ISIS attacks, who accused the companies of aiding the terrorist group by allowing it to disseminate content through their channels. The justices ruled that the companies were not at fault.
Family members of terrorism victims failed to demonstrate that either Twitter or Google helped terrorists in liability cases before the US Supreme Court. The cases centred on whether social media companies provided "substantial aid" to terror groups through their algorithms. In the Twitter case, US relatives of Nawras Alassaf argued that the platform failed to adequately monitor ISIS-related content and accounts ahead of the terrorist attack in Turkey that claimed Alassaf's life. Parisian Mourad Hamyd claimed that YouTube, owned by Google, should be liable for content promoting ISIS following the Paris attacks, after his brother, also an attack victim, was implicated by "false statements" in a video.
The US Supreme Court has declined to rule on a case that pitted the family of a student killed in an Islamic State (IS) attack against internet giant Google. The complainants alleged that Google's YouTube had played a role in radicalising individuals who went on to join the terrorist group. Google claimed protection against court action under a 1996 law that generally shields social media companies from responsibility for user-generated content. The case has been sent back to the lower courts for further consideration.
The decision is seen as a victory for the technology industry, which had argued that a ruling against Google could have far-reaching implications, potentially stifling free speech online and inviting more frivolous lawsuits against social media firms.
The court also ruled on a similar case arguing that Google, Twitter and Facebook should be held liable for a terrorist attack on a Turkish nightclub that claimed 39 lives. The justices unanimously ruled that the case could not proceed under a law that forbids aiding terrorists.
Twitter cannot be held liable over an attack on an Istanbul nightclub that killed 39 people, according to the Supreme Court. Twitter was accused of violating the Anti-Terrorism Act by giving ISIS a platform to release propaganda that sparked the attack. A group of Americans related to a Jordanian killed in the attack filed the suit, which a court of appeal had revived in 2021 but which the US Supreme Court has now ruled cannot proceed. The ruling confirms that social media companies cannot be held responsible for user-generated content, a decision widely welcomed by other internet giants.
As geopolitical tensions involving China continue, Asia's subsea cables, which carry much of the world's data, are facing delays and higher costs because of difficulties in laying lines along the floor of the South China Sea. Companies such as Google, Nokia and Japan's NTT are attempting to bypass the contested area, almost all of which Beijing claims for itself.
The US Supreme Court has thrown out a case that sought to make Google and Twitter liable for videos endorsing the terrorist group Islamic State. Two similar but separate cases against the companies were considered. The court left intact Section 230 of the Communications Decency Act, which prevents websites from being held liable for content posted by users. The family of Nohemi Gonzalez, who was killed in a terrorist attack in Paris in 2015, had claimed that Google shared advertising revenue from YouTube videos containing Isis propaganda. The court rejected the case and related claims against Twitter.
The US Supreme Court has protected social media platforms such as Google, Facebook and Twitter by rejecting one lawsuit over a lethal attack on a Turkish nightclub and tossing a second case back to a lower court. The move preserves the law known as Section 230, which shields social media companies from liability for material posted on their platforms. The court's decision has renewed calls for Congress to revise the law, adopted in 1996, that provides this legal protection.
A recent debate at Melbourne University on the future of Australia's constitutional monarchy, hosted by the Robert Menzies Institute, ended with protesters shouting at Melbourne's Deputy Lord Mayor, Nicholas Reece, a former Labor Party state secretary. CBD suggests Melbourne University introduce a compulsory module called Google 101 to help enrollees tell one politician from another. Meanwhile, Australian public servants took Uber rides worth AUD15,000 ($11,640) in the past year, up from AUD4,000 previously, according to recently released Freedom of Information documents. Cab rides, however, still cost public servants AUD59,118.20 in 2022-23.
The US Supreme Court has issued its first rulings in cases testing Section 230, which shields online services from liability for users' content. The decisions were unanimous and found that Twitter and Google did not need the law's protection to escape liability for hosting terrorist content on their platforms. The rulings related to two cases in which families of ISIS terrorist attack victims sued the sites, claiming that their recommendation algorithms had aided the group by circulating its content. Observers had anticipated that the cases could reshape tech firms' legal protections.
Montana's new law banning the use of TikTok in the state is facing a legal challenge from five plaintiffs, who argue that the law unconstitutionally violates free speech rights and that Montana lacks authority over matters of national security. The law, signed by Republican Governor Greg Gianforte, is due to take effect in 2024, but it is unclear whether it can be enforced, as Apple or Google would be liable for any violations and sanctions would not apply to users. More than 200,000 people and 6,000 businesses in Montana reportedly use TikTok.