Ian Hogarth, the head of the UK government's AI taskforce, highlights the challenge of safeguarding British jobs as AI systems become more advanced. While acknowledging that some job automation is inevitable, he emphasizes the need for a global rethinking of work patterns.
As artificial intelligence systems advance, Ian Hogarth, the new head of the United Kingdom government's AI taskforce, has expressed concerns about the challenge of safeguarding British jobs. Speaking to the British Broadcasting Corporation, Hogarth said he believed that increased automation would inevitably lead to some job losses globally.
Hogarth emphasized the need for the world to rethink how people work, acknowledging that winners and losers will emerge in terms of job opportunities due to AI's impact. Already, reports of job losses have surfaced as companies opt for AI tools over human employees.
While some fear job displacement, others believe that AI will create new job opportunities, just as the internet did in the past. A Goldman Sachs report indicates that 60 per cent of current jobs did not exist in 1940.
Prime Minister Rishi Sunak is prioritizing AI, aiming for the country to become a global hub for the sector. The main goal of the UK's new AI taskforce is to help the government comprehend the risks posed by frontier AI systems and hold companies accountable. However, positioning the UK as a key player in this rapidly advancing field poses several challenges.
Hogarth is particularly concerned about potential harms, such as wrongful arrests caused by AI in law enforcement or increased cybercrime enabled by malicious computer code.
Despite these risks, Hogarth recognized the significant benefits AI brings, especially in healthcare. AI tools are making strides in identifying new antibiotics, aiding those with brain damage in regaining movement, and spotting early disease symptoms.
AI and the world
Governments worldwide are grappling with the regulation of AI. The European Parliament recently voted in favour of the EU's proposed Artificial Intelligence Act, which aims to establish a stringent legal framework for AI, obliging companies to adhere to its guidelines.
Margrethe Vestager, the EU's competition chief, mentioned forthcoming legislation, scheduled for implementation in 2025, that will categorize AI applications based on the level of risk they pose to consumers. Low-risk applications, such as AI-enabled video games or spam filters, will fall under the least stringent category. High-risk systems, such as those used for credit score evaluation or determining access to housing, will be subject to the strictest controls.
In contrast, the UK has taken a different approach. In March, the government presented its vision for the future of AI, ruling out the establishment of a dedicated AI regulator. Instead, it stipulated that existing bodies would be responsible for overseeing AI. As a result, the EU's AI regulations will not apply to the UK.