Artificial intelligence (6do encyclopedia)



Artificial intelligence (AI) is a rapidly growing field of computer science concerned with building machines that can perform tasks typically requiring human intelligence, such as understanding natural language, recognizing patterns, and making decisions. The field focuses on the design and development of algorithms that emulate aspects of human intelligence to solve complex problems.

AI is being used in a broad range of applications including robotics, medical diagnosis, autonomous vehicles, image recognition, natural language processing, and fraud detection, among others. The technology has already revolutionized the way we live and work, and it has the potential to continue to do so in the future.

Types of Artificial Intelligence

There are three main types of artificial intelligence:

  1. Reactive Machines

Reactive machines are the simplest type of AI, as they are designed to react to specific situations based on pre-defined rules. They do not have an understanding of the past or the future, which means they cannot learn from experience or adapt their behavior. Examples include automated teller machines (ATMs) and chess-playing computers.

  2. Limited Memory

Limited memory AI systems have the ability to learn from past experiences and adjust their behavior based on them. These systems are used in applications like self-driving cars and recommendation systems, which need to make decisions based on a combination of past and current information.

  3. Self-aware

Self-aware AI systems would have the ability to reason, understand, and interact with their environment, and are considered the most advanced type of AI. They would be able to perceive emotions and would possess a sense of self-awareness. No self-aware AI systems currently exist; building one remains a goal for future development.
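The distinction between the first two types above can be made concrete with a small sketch (illustrative code only, not drawn from any real system): a reactive agent maps each input to an action through fixed rules, while a limited-memory agent keeps a short history of past observations and lets that history influence its decisions.

```python
# Illustrative sketch: a reactive thermostat agent vs. a limited-memory one.
# The threshold of 25.0 and the 3-reading window are arbitrary example values.

def reactive_agent(temperature: float) -> str:
    """Fixed rules only: the decision depends solely on the current input."""
    return "cool" if temperature > 25.0 else "idle"

class LimitedMemoryAgent:
    """Keeps a short window of past readings and reacts to their average."""

    def __init__(self, window: int = 3):
        self.window = window
        self.history: list[float] = []

    def act(self, temperature: float) -> str:
        self.history.append(temperature)
        recent = self.history[-self.window:]
        avg = sum(recent) / len(recent)
        # The decision now depends on past observations as well as the
        # current one -- the defining trait of limited-memory AI.
        return "cool" if avg > 25.0 else "idle"

print(reactive_agent(30.0))   # reacts only to the single current reading
agent = LimitedMemoryAgent()
for t in (20.0, 24.0, 30.0):
    decision = agent.act(t)
print(decision)               # decision smoothed by the 3-reading history
```

Note how the two agents can disagree on the same final reading: the reactive agent sees only 30.0 and acts, while the limited-memory agent averages it against the cooler recent history.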

Applications of AI

AI has a broad range of applications, and it is already being used in many industries, including:

  1. Healthcare

AI is being used to help doctors make diagnoses and develop treatment plans. It can help to identify patterns in medical images and predict the likelihood of future health problems.

  2. Finance

AI is being used to detect fraudulent transactions and predict market trends. It can also help to provide personalized financial advice based on individual needs and goals.

  3. Manufacturing

AI is being used to optimize production processes and improve quality control. It can also help to predict equipment failures and schedule maintenance.

  4. Education

AI is being used to develop personalized learning plans and provide feedback to students. It can help to identify areas where students are struggling and provide resources to help them improve.
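The pattern-recognition idea underlying applications such as fraud detection can be sketched with a deliberately simple toy (illustrative only; real systems use learned models over far richer features): flag transactions that deviate sharply from a customer's historical spending. The function name, sample amounts, and the 3-standard-deviation threshold below are all invented for the example.

```python
# Toy statistical fraud filter: flag transaction amounts that fall far
# outside a customer's historical spending pattern.
from statistics import mean, stdev

def flag_outliers(history: list[float], new_amounts: list[float],
                  threshold: float = 3.0) -> list[bool]:
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return [abs(amount - mu) / sigma > threshold for amount in new_amounts]

# A customer's recent purchases, then two incoming transactions.
history = [12.5, 40.0, 22.0, 35.0, 18.0, 27.5]
flags = flag_outliers(history, [30.0, 950.0])
print(flags)  # the 950.0 transaction stands out against the history
```

A production system would replace the z-score with a trained classifier and account for context (merchant, location, time of day), but the core idea is the same: learn what "normal" looks like and flag departures from it.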

Challenges and Future Developments

While AI has the potential to benefit society in many ways, there are also significant challenges that need to be addressed. One of the biggest is AI's impact on employment. As AI systems become more advanced, they may replace human workers in certain industries, which could lead to job loss and economic disruption.

Another challenge is the lack of transparency in AI decision-making. As AI systems become more complex, it becomes harder to understand how they are making decisions, which could lead to biased or unfair outcomes.

Despite these challenges, the future of AI is exciting. Researchers are continuing to develop more advanced AI systems, and there is a growing interest in the development of self-aware AI. As the technology continues to evolve, it has the potential to transform many aspects of our lives and change the way we think about what it means to be intelligent.


Disclaimer
6do Encyclopedia represents the inaugural AI-driven knowledge repository, and we cordially invite all community users to collaborate and contribute to the enhancement of its accuracy and completeness.
Should you identify any inaccuracies or discrepancies, we respectfully request that you promptly bring these to our attention. Furthermore, you are encouraged to engage in dialogue with the 6do AI chatbot for clarifications.
Please be advised that when utilizing the resources provided by 6do Encyclopedia, users must exercise due care and diligence with respect to the information contained therein. We expressly disclaim any and all legal liabilities arising from the use of such content.

‘Pathogen detectors’ could be installed at sea ports under new government proposals

Telegraph

23-05-11 14:32


The UK government is considering fitting "advanced sensors" in ports across the globe to help detect the spread of infectious diseases and new pathogenic threats. UK Science Minister George Freeman said the same technology would be used in airports too, to improve monitoring of the flow of pathogens across global supply chains. This data would be shared, allowing governments to take prompt action against emerging threats before they escalated. Freeman said the "advanced sensor technology" needed to run the programme already existed, adding that AI would play a critical role in monitoring and detecting infectious diseases.

https://www.telegraph.co.uk/global-health/science-and-disease/pathogen-surveillance-ports-outbreaks-disease-viruses/
The likely winners of the generative AI gold rush

Financial Times

23-05-11 13:20


Google and other large firms developing generative AI risk being outmaneuvered by more agile startups, according to a leaked memo by a Google executive. Such firms may develop the most impressive AI models, but are vulnerable to competitors building cheaper open-source models with the capacity for customisation, the memo said. VC investors have also been betting on a new wave of generative AI startups such as AI21 Labs, which are better placed to move fast and mostly ignore safety controls. At the same time, hardware providers such as Nvidia stand to benefit from generative AI's voracious demand for computing power.

https://www.ft.com/content/0cbe91ec-0971-4ba6-bdf1-87855aedd34c
EU lawmakers agree tough measures over use of AI

Financial Times

23-05-11 12:19


The European Parliament has approved its position on the EU's Artificial Intelligence Act, including a prohibition of almost all facial recognition for monitoring citizens, while generative AI models would be required to disclose AI-generated content. The EU will now begin negotiations with member states before the rules are finalized. Governments have stated their willingness to use some limited live facial recognition, such as when addressing terrorist threats. The UK competition watchdog and the US Federal Trade Commission have also warned against the potential for abuse of AI.

https://www.ft.com/content/da597e19-4d63-4d4d-b7d1-c3405302a1e3
EU lawmakers take first steps towards tougher AI rules

Deutsche Welle

23-05-11 10:44


Committees in the European Parliament have approved the draft AI Act, legislation on artificial intelligence that also includes amendments to restrict generative AI such as ChatGPT. Members of the committees on civil liberties and on consumer protection voted for the new legislation to place curbs on how the technology can be used, while also allowing for innovation. Lawmakers added an amendment placing ChatGPT and similar generative AI on the same level as high-risk systems; once adopted by the full parliament, the act would become Europe's "landmark legislation" on AI.

https://www.dw.com/en/eu-lawmakers-take-first-steps-towards-tougher-ai-rules/a-65585731
AI and space technology boost smallholders’ access to finance

Financial Times

23-05-11 04:26


New approaches to lending for smallholder farmers in the world’s poorest regions are being developed using technologies such as artificial intelligence (AI) and machine learning. Norwegian firm Sensonomics is working with non-profit microfinance firm Opportunity International and the European Space Agency to use high-resolution satellite images to provide information about the vagaries of the weather. In Kenya, Apollo Agriculture is using AI machine learning to assess farmers’ creditworthiness, with funds provided in the form of vouchers for farm inputs like seeds. In Uganda, Opportunity International’s agents use digital processes to run credit checks and reduce loan approval times from 60 days to four. In Ghana and Kenya, loan origination company Mfarmpay is using satellite images and farmers’ information to generate credit scores, while offering farmers insurance, advisory and bundled digital services. India’s Harvesting Farmer Network is using AI, machine learning and mobile data to help its co-operative of more than 3.7 million members develop new micro-lending opportunities.

https://www.ft.com/content/59a98dec-f960-4b29-9686-1563442ad055
UK policing minister pushes for greater use of facial recognition

Financial Times

23-05-15 23:19


The UK's policing minister, Chris Philp, has called for the national rollout of facial recognition technology across police forces, according to an academic report submitted to parliament on Tuesday. The systems have faced concerns over their legality and accuracy, and the European Union is set to ban facial recognition in public spaces due to privacy and rights concerns. The report found Philp "expressed his desire to embed facial recognition technology in policing and is considering what more the government can do to support the police on this".

https://www.ft.com/content/b8477e16-349d-442d-8e69-59b328ba9189
The race to bring generative AI to mobile devices

Financial Times

23-05-16 04:22


Advancements in generative artificial intelligence (AI) could transform mobile communications and computing at a faster pace than expected, according to the Financial Times. Tech firms have been attempting to embed generative AI into their software and services but face higher computing costs, while internet search users increasingly expect AI-generated content in standard search results. By running generative AI on mobile handsets, costs could be lowered and services such as chatbots could be far cheaper for companies to run. Smaller, open-source models have also made the technology more accessible to businesses wanting to use generative AI in their own services.

https://www.ft.com/content/6579591d-4469-4b28-81a2-64d1196b44ab
AI can help solve writer’s block, claims Pet Shop Boys’ Neil Tennant

Telegraph

23-05-16 07:00


Pet Shop Boys member Neil Tennant has said that artificial intelligence (AI) could be deployed as a tool by musicians suffering from writer’s block. Although concerns have been raised that machine learning could ultimately make human artists redundant, Tennant said AI offered some benefits, such as allowing songs to be completed more easily. He cited a demo of a bot conducted by the band’s manager’s 15-year-old daughter, which had produced a song in the band’s style. Tennant and bandmate Chris Lowe also criticised the “ghettoisation” of children’s TV programmes on dedicated channels.

https://www.telegraph.co.uk/news/2023/05/16/pet-shop-boys-neil-tennant-ai-solve-writers-block/
This AI hoax should terrify woke journalists

Telegraph

23-05-16 07:00


The Irish Times published an opinion piece that was created and submitted by a pseudonymous prankster using AI. The column claimed that wearing fake tan is racist, and gained much traffic before being removed by the editors, who then apologised for not spotting the prank. The editors had some excuse, the columnist argues: the accusations in the column were consistent with the type routinely made against white people by left-wing columnists at publications such as The Guardian.

https://www.telegraph.co.uk/columnists/2023/05/16/irish-times-ai-hoax-fake-tan-racist-woke/
We have put the world in danger with artificial intelligence, admits ChatGPT creator

Telegraph

23-05-16 20:22


Governments need to take urgent action to prevent the world being harmed by rogue AI, according to OpenAI CEO Sam Altman. Altman cited fears that programmers could accidentally create a superintelligence that could destroy humanity, adding: "If this technology goes wrong, it can go quite wrong." Altman suggested powerful AI algorithms should be licensed and audited, with the aim of preventing them from developing capabilities such as self-replication, and that an international body similar to the International Atomic Energy Agency could be created to manage compliance. Such measures should be followed by enhanced privacy rules, Altman said.

https://www.telegraph.co.uk/business/2023/05/16/chatgpt-creator-sam-altman-admits-world-in-danger/
Regulation ‘critical’ to curb risk posed by AI, warns boss of ChatGPT

The Independent

23-05-16 18:58


OpenAI CEO Sam Altman has told Congress that artificial intelligence (AI) should be regulated, with particular attention paid to possible election interference. During a Senate hearing in the US, Altman proposed the introduction of an agency responsible for providing licences for the most powerful AI. He said the technology is advancing at a rapid pace, and people are concerned about how it might change their lives. The hearing, held by the Senate Judiciary Subcommittee on Privacy, Technology and the Law, examined the future impact of AI systems.

https://www.independent.co.uk/independentpremium/business/ai-regulation-sam-altman-congress-openai-chatgpt-b2340077.html
ChatGPT chief says artificial intelligence should be regulated by a U.S. or global agency

The Toronto Star

23-05-17 00:00


Sam Altman, CEO of OpenAI, the company behind the human-like chatbot ChatGPT, told a US Senate hearing that the creation of a U.S. or global agency that could provide licences for the most powerful AI systems would be vital to mitigating the risks of advancing AI. The agency would also be tasked with ensuring compliance with safety standards, and would be able to withdraw licences. Altman's comments expanded beyond earlier concerns over AI-enabled cheating to a wider range of issues, from misinformation to breaches of copyright protection and job dislocation. Altman's proposals echo initiatives under way in Europe.

https://www.thestar.com/business/2023/05/16/chatgpts-chief-to-testify-before-congress-as-concerns-grow-about-artificial-intelligence-risks.html
Biden Will Find That Breaking Up With China Is Hard to Do

Bloomberg

23-05-16 22:00


The Biden administration intends to prioritize a “precise, limited” approach to decoupling from China that will protect US interests in key areas while keeping the larger economic bonds between the two nations mostly intact. Washington will deploy financial sanctions and export controls to reshape the economic relationship if necessary, acting with key partners and allies if possible, but it will apply these tools narrowly to limit spillover into non-strategic areas. Some American lawmakers have criticized the Biden administration’s approach. China is also moving to reduce its vulnerability to US sanctions by establishing dominance in areas ranging from critical minerals to telecommunications.

https://www.bloomberg.com/opinion/articles/2023-05-16/biden-s-economic-decoupling-plan-with-china-won-t-work?srnd=next-china
ChatGPT creator ‘nervous’ about AI election manipulation

The Independent

23-05-17 09:26


Sam Altman, CEO of OpenAI, has told a US congressional hearing that election interference through the use of artificial intelligence (AI) needs to be regulated. Speaking to the Senate Judiciary Subcommittee on Privacy, Technology and the Law, Altman said AI-powered chatbots were a “significant area of concern”. He stressed the importance of “rules and guidelines” to protect the integrity of voting. The hearing saw lawmakers consider whether to implement a requirement for licensing of certain types of AI machines. Altman suggested that a threshold for licensing could be a model able to persuade or manipulate beliefs.

https://www.independent.co.uk/tech/chatgpt-sam-altman-ai-elections-b2340413.html
How the CEO behind ChatGPT won over Congress

CNN

23-05-17 15:36


A Senate subcommittee hearing this week on regulating artificial intelligence (AI) was striking because the executives from OpenAI and IBM who testified largely avoided the hostile questioning, and the attacks on their corporate responsibility for managing the dangers of AI, that have characterised previous appearances by tech executives before Congress. OpenAI CEO Sam Altman in particular charmed lawmakers by presenting a straight-talking, apolitical approach to the myriad issues raised by AI. Altman was careful to emphasise that OpenAI's aims are not to addict people to online content, nor to create tools that could startle, manipulate or misinform. His maturity and candour made clear his passion for the subject and why he believes it is important to get it right. Consequently, he may have manoeuvred his start-up into the influential position of being the "go-to" firm with which lawmakers work to regulate AI and lessen its potential risks.

https://edition.cnn.com/2023/05/17/tech/sam-altman-congress/index.html
Navy may not always have ships on the sea, First Sea Lord says amid AI changes

Telegraph

23-05-17 21:01


The head of the UK's navy has called for the increased use of drones, artificial intelligence and other technologies as standard in the armed forces, in order to keep up with rapid advancements across the globe. Admiral Sir Ben Key has warned that the country cannot afford to "come second" in the modern arms race, particularly given the suspected deployment of modern Russian submarines. The call comes at a time when budgets are strained and face future cuts, with plans for two amphibious assault ships having already been scrapped last year.

https://www.telegraph.co.uk/news/2023/05/17/navy-artificial-intelligence-sir-ben-key/
China Military’s Use of AI Raises Alarm for Congress, Ex-Google CEO

Bloomberg

23-05-17 19:53


A report has argued that China's embrace of artificial intelligence (AI) for its military means the US needs to redesign its military to respond to the threat. Produced by the Special Competitive Studies Project, headed by former Google CEO Eric Schmidt to speed up the adoption of AI in the US defence establishment, the report argues that China’s 30-year effort to study US combat techniques will be greatly enhanced by the development of AI. The ratcheting up of AI usage by both countries has sparked concerns of an arms race, potentially exacerbating any diplomatic conflict. US concerns have included the risk of AI-enabled attacks against satellites in space and nuclear architecture, while Redwood City venture capitalist Vinod Khosla has argued that slowing down US AI research would benefit China. United Nations Secretary-General Antonio Guterres has called for a new international law on autonomy in weapons systems.

https://www.bloomberg.com/news/articles/2023-05-17/china-military-ai-use-raises-alarm-for-congress-ex-google-ceo?srnd=next-china
US Investors Putting China Tech Engagement ‘On Hold,’ Patrick Zhong Says

Bloomberg

23-05-17 19:19


US investors are calling it quits on China technology investments because of tensions between the two countries, according to venture capitalist Patrick Zhong. He said that many investors and company CEOs stopped travelling to China during Covid-19 restrictions and have since put investment on hold because of geopolitical concerns. Before founding M31 Capital, which has strong ties to China, Zhong invested in companies such as Alibaba, Tencent and Baidu. While he is not concerned about China's technology regulation efforts, he said he is optimistic that entrepreneurs from different countries can work together in the future.

https://www.bloomberg.com/news/articles/2023-05-17/us-investors-putting-china-tech-engagement-on-hold-says-zhong?srnd=next-china
AI pioneer Yoshua Bengio says governments must move fast to ‘protect the public’

Financial Times

23-05-18 04:24


AI pioneer Yoshua Bengio, a 2018 Turing Award recipient and founder of Mila, the Quebec Artificial Intelligence Institute, has said governments must quickly intervene and “protect the public” as powerful AI machines such as OpenAI’s GPT grow increasingly common and accessible. Bengio noted that indiscriminate access to large language models without sufficient scrutiny could pose a threat to political systems, democracy and truth itself. Google and Microsoft-backed OpenAI have both launched revolutionary, generative AI products in recent months, prompting a global debate about the ethics and future of the technology.

https://www.ft.com/content/b4baa678-b389-4acf-9438-24ccbcd4f201