Artificial intelligence (AI) may cause job losses, but new advances in technology can also create new jobs, according to experts. Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered AI, said he wished people would recognise the scope for undertaking tasks that were never previously possible. Researchers at OpenAI and the University of Pennsylvania found that in about 80% of jobs, at least 10% of tasks could be automated using generative AI, which can produce text, images and other content. However, the new technology may complement human labour, allowing workers to focus on new areas of work. Brynjolfsson cited the example of an AI tool that generated live suggested responses for call-centre staff to give to customers, boosting productivity by 14% on average and improving customer sentiment.
Beijing's government has called on major Chinese tech firms to speed up the development of artificial general intelligence (AGI) as it sharpens its focus on the sector to compete with the US. This week the city's technology promotion agency began inviting AI, augmented-reality and virtual-reality research projects, with up to CNY60m ($8.6m) to be awarded across up to 12 projects over two years. Authorities are urging partners, including Alibaba Cloud, Alibaba Group, Baidu and 360 Security Technology, to work on the development and application of large language models, the technology that underpins generative AI.
The emergence of artificial intelligence (AI) raises the prospect of discrimination, damage to democracies and mass job losses, and tech leaders are wary of unintended consequences. These concerns have led to calls for a law to regulate AI systems and codify penalties for bad actors, as well as fears over the technology's ability to manipulate and persuade voters. Having missed the opportunity to regulate social media, world leaders are being urged to come together to monitor and regulate AI, with a piecemeal approach deemed inadequate.
The Hong Kong University of Science and Technology's Centre for Education Innovation has urged its professors to use artificial intelligence (AI) systems in their lessons. Staff have been allowed to come up with their own guidelines for AI's use, and HKUST plans to spend at least $10m on generative-AI applications in curriculums. Sean McMinn, director of the centre, said using AI in education can improve student performance and prepare students for working with the technology in the future. Students can use generative-AI tools such as ChatGPT, a chatbot built by OpenAI, to create text, while related tools can generate audio, video or images.
The use of artificial intelligence (AI) by students for academic purposes is growing rapidly, forcing schools and universities to rethink how they conduct tuition and academic testing. Many students are using AI tools to help with academic work, and some are using them to cheat on assignments and exams. However, educators and students are also cautiously experimenting with generative AI to enhance lessons, while questioning whether it is possible to use AI in education without undercutting the most important features of human learning. One of the major challenges with generative AI is accuracy: the models can “hallucinate”, fabricating facts, and their black-box nature makes it hard to trace how an output was produced. There is also evidence that AI-written text can reproduce biases learned from internet content, including sexism, racism and political partisanship.
The cost of AI infrastructure, particularly for generative AI, is so high that few companies can afford to build it. Even OpenAI reportedly burned through $540m in one year as it developed its products. Building the sort of technology offered by Microsoft, Google and Amazon would require firms to make significant investments in state-of-the-art chips and prize-winning researchers. “People don’t realise that to do a significant amount of AI things like ChatGPT (OpenAI’s text generator tech) takes huge amounts of processing power. And training those models can cost tens of millions of dollars,” said Jack Gold, an independent analyst. As a result, companies will need to rely on platforms provided by tech giants such as Microsoft, leaving them with less control over their infrastructure and augmenting the influence of already-dominant players.
The Group of Seven (G7) nations have agreed to launch the Hiroshima AI process, an initiative that will address the opportunities and challenges of generative artificial intelligence through improved collaboration among countries. A working group including other international organisations, such as the Organisation for Economic Cooperation and Development (OECD), will facilitate discussions and draft standards for “responsible AI”, according to a communique. A recent OECD report cited 92 data-localisation measures enacted in 39 countries as of 2021, with tougher regulations increasingly implemented over the previous five years. The G7 is proposing ways to secure smooth cross-border data flows in a trustworthy manner.
Sam Altman, chief executive of OpenAI, the company behind the ChatGPT chatbot, has warned that artificial intelligence (AI) could cause significant harm to the world. Altman told a US Senate hearing that his worst fear was that the technology could go significantly wrong, which was why stricter regulation was needed. With AI, particularly on social media, already shaping beauty ideals, an eating-disorders organisation last week generated images of the “ideal” man and woman according to social-media usage: the images generally displayed big hair, big muscles, olive skin, perfect teeth and large breasts, with the women mostly blonde and the men dark and handsome.
Ranking business schools' executive-education courses is a complex and difficult task, given the variety of content, clients and course durations. The pandemic has shifted demand for courses, and this report describes the challenges of ranking them while inviting readers to suggest modifications to better select and evaluate the programmes. It highlights cultural diversity, sustainability and technology as the three main topics of concern in executive education today. The report also highlights the work of Duke Corporate Education, which ranked top for custom programmes, and an innovative programme combining an MBA, leadership training and an executive-education course to tackle the low representation of Arab Israelis in business.
The Cyberspace Administration of China (CAC) has prohibited operators of the country's key infrastructure from buying US chipmaker Micron Technology's products over concerns of “serious network security risks”. The ban is the first big measure taken against a US semiconductor group and follows a seven-week CAC investigation into Idaho-based Micron, seen as retaliation for US efforts to curb China’s access to key technology. Washington introduced extensive chip export controls last October, and the Netherlands and Japan have since followed. China is a crucial market for Micron, and analysts have warned that Beijing’s restrictions could prompt Chinese firms that provide “critical information infrastructure” to remove Micron from their supply chains.
Regulators are applying existing legislation to emerging generative-AI tools such as ChatGPT while they draft new AI rules to address privacy and safety concerns. European Union (EU) national privacy watchdogs have set up a task force on ChatGPT, and France and Spain have launched investigations into OpenAI's compliance with privacy laws. Regulators are "interpreting and reinterpreting their mandates" in the wake of advances in generative AI, covering everything from data privacy to the content produced by AI models. New legislation specifically addressing AI is expected to take several years to come into force.
Gen Z employees entering the workforce have become detached due to the disruption of university experiences during the Covid-19 pandemic, according to a recent report by Oliver Wyman and The News Movement. The study showed Gen Z’s first jobs became “two-year video calls” due to the pandemic. However, despite some concerns, many young workers have supported recent shifts to more remote working. Gen X and millennial managers were urged to consider social events like happy hours. Staff entertainment budgets were cut during the pandemic, but there is a case for subsidising informal occasions for colleagues to learn from each other in a less forced setting than a training course.
Academic and investment guru Eugene Fama has argued that market pricing is currently better at determining stock and bond prices than AI. Fama said that while machine learning and other kinds of AI have the potential to execute some tasks better, they cannot predict future outcomes the way markets can. Citing the efficient-market hypothesis, which he describes as 50 years old and proven, he said AI would have to match the real-time judgments embedded in millions of market transactions to overtake market pricing. He also noted that stock markets have returned about 10% a year over nearly a century.
Hiroshima's tragic legacy a reminder of potential dangers of today's no-limits technologies (CBC, 2023-05-22 12:45)
Leaders of the G7 nations have urged the adoption of technical standards to maintain trust in artificial intelligence (AI). The development of generative AI, which can create realistic text, has been a source of concern for some who argue it could potentially create realistic-looking so-called "deepfake" videos or other forms of misinformation. Many tech entrepreneurs and AI researchers, including Elon Musk and Apple's head of machine learning, recently called for a pause in generative AI's development, citing potential risks. Politicians have also been increasingly concerned: the EU is on the verge of signing the world's most comprehensive law to regulate AI. Meanwhile, US President Joe Biden has appeared more relaxed, saying it remains to be seen whether AI poses a danger.
Worldcoin, a cryptocurrency initiative backed by Silicon Valley luminaries such as Marc Andreessen and Sam Bankman-Fried, is gaining renewed momentum with its plan to create “World ID”, which will use biometric iris scans to distinguish between people and bots. The scheme drew criticism for scanning half a million irises during its “field test” using a chrome sphere named “the orb”, yet it still has a valuation of $1bn and is reportedly about to secure a further $100m. Critics claim the Worldcoin Foundation, which oversees its digital-token allocation and claims a database of 1.7 million iris-derived codes, lacks transparency, as it is based in the Cayman Islands. Hackers have also stolen credentials from its employee “operators”, including recruiters who sign people up. The company now hopes to integrate its app, which offers crypto transactions, with more traditional financial services in future.