The Economics of Artificial Intelligence:-
Every day we hear reports that Artificial Intelligence (AI) technologies are going to change the economy, cause mass unemployment and create massive monopoly power.
Economists have been studying the relationship between technological progress, productivity and work since the discipline's inception, starting with Adam Smith's pin factory. It should therefore come as no surprise that AI systems capable of performing well in a growing range of situations, from driving vehicles to identifying lesions in medical scans, have caught their attention.
A group of respected economists convened in Toronto in September 2017 to set the research agenda for the Economics of Artificial Intelligence (AI). They discussed questions such as what is economically special about AI, what its impacts will be, and which policies are best suited to spreading its benefits.
The workshop and its related papers addressed issues at four levels:-
Macro View: Effect of AI on aggregate economic variables such as growth, wages or inequality.
Meso View: Effect of AI on sectors such as scientific research or regulation.
Micro View: Effect of AI on the behaviour of organisations and individuals.
Meta View: Effect of AI on the data and methods used by economists to research AI.
The Economists' View of AI:-
In earlier work, Ajay Agrawal, Joshua Gans and Avi Goldfarb, the organisers of the conference, described AI systems as “prediction machines” that make predictions cheap and plentiful, allowing organisations to make more and better decisions and to automate some of them. One example is Amazon's recommendation engine, which delivers a customised version of the website to each visitor. This kind of customisation would not be feasible without a machine learning algorithm (a form of AI) that infers which items are likely to interest a particular customer from behavioural data about that customer and about similar users.
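The prediction at the heart of such a recommendation engine can be sketched with a toy example. The snippet below is a minimal, hypothetical illustration (Amazon's actual system is proprietary and far more sophisticated) of user-based collaborative filtering: it predicts which unseen item a customer is most likely to want, based on the behaviour of similar users.

```python
import numpy as np

# Rows = users, columns = items; 1 means the user bought/viewed the item.
# The data is invented purely for illustration.
behaviour = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 1, 1, 0],   # user 1 (similar to user 0, but also bought item 2)
    [0, 0, 1, 1],   # user 2
], dtype=float)

def recommend(user, matrix):
    """Score unseen items by how much similar users liked them."""
    norms = np.linalg.norm(matrix, axis=1)
    # Cosine similarity between this user and every other user.
    sims = matrix @ matrix[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0                       # ignore self-similarity
    scores = sims @ matrix                 # weight items by user similarity
    scores[matrix[user] > 0] = -np.inf     # drop items already seen
    return int(np.argmax(scores))

print(recommend(0, behaviour))  # → 2, because similar user 1 bought item 2
```

The economic point is that once such predictions are cheap, they can be produced for every visitor on every page view, which is exactly the kind of abundance the “prediction machines” framing emphasises.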
AI systems could be adopted in any sector facing a prediction problem, which is to say almost everywhere in the economy, from agriculture to finance. This pervasive relevance has prompted some policymakers to hail AI as the latest example of a transformational “General Purpose Technology” that will reshape the economy as the steam engine and the semiconductor did before it.
The massive impact of falling costs:-
When we look at artificial intelligence from the point of view of economics, we ask the same single question that we ask of every technology: what does it reduce the cost of? Economists are fantastic at taking the fun and wizardry out of technology and leaving us with this dry yet enlightening question. The answer reveals why AI is so important compared with many other trending fields: AI can be recast as causing a drop in the cost of a first-order input into many business operations and into our lives, namely prediction.
To understand the fundamental changes that arise when a technology lowers the cost of a valuable input, consider another technology: semiconductors. Semiconductors reduced the cost of arithmetic, and as they did so, three things happened.
First, we began using this inexpensive arithmetic to solve problems that had not traditionally been framed as arithmetic problems. Photography, for example, was once a chemistry problem (film-based photography); as arithmetic became cheap, we moved to arithmetic-based approaches to camera design and image reproduction (digital cameras).
Second, when the cost of arithmetic went down, the value of other things shifted: the value of arithmetic's complements went up, and the value of its substitutes went down. In the case of photography, the complements are the software and hardware used in digital cameras; their value rose as we needed more of them, while the value of the substitutes, the components of film-based cameras, fell as we used fewer and fewer of them.
Third, we began using more arithmetic in applications that already used arithmetic as an input. In the 1960s these were mainly government and military applications; later, as arithmetic became cheaper, it was applied more heavily to tasks such as demand forecasting, which became both easier and cheaper to do.
Using AI to study AI:-
AI algorithms have a great deal to offer economic research, which likewise aims to identify underlying relationships in data. Susan Athey discussed these possibilities at the inaugural Economics of AI conference, with a special emphasis on how machine learning can be used to improve current econometric methods.
Several papers at the conference drew on new kinds of data and methods along these lines, such as large datasets from LinkedIn and Uber, and online experiments to assess how Uber drivers respond to informative nudges. At Nesta, we use machine learning approaches on open datasets to map AI research.
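As a concrete illustration of how machine learning can support econometrics, the sketch below implements a simplified version of "post-double-selection" (a technique due to Belloni, Chernozhukov and Hansen, in the spirit of the ML-for-econometrics agenda Athey discusses): a Lasso picks the relevant control variables before a treatment effect is estimated by ordinary least squares. All data, coefficients and sample sizes here are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 2000, 50                       # observations, candidate controls
X = rng.normal(size=(n, p))           # many potential confounders
d = X[:, 0] + rng.normal(size=n)      # treatment depends on confounder X[:,0]
y = 2.0 * d + 3.0 * X[:, 0] + rng.normal(size=n)  # true effect of d is 2.0

# Step 1: Lasso of y on X, and of d on X, to select relevant controls.
sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)
controls = np.union1d(sel_y, sel_d)

# Step 2: OLS of y on the treatment plus the union of selected controls.
Z = np.column_stack([np.ones(n), d, X[:, controls]])
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
effect = beta[1]                      # estimated treatment effect, near 2.0
print(round(effect, 2))
```

A naive regression of `y` on `d` alone would be badly biased here, because the confounder drives both; the machine learning step finds the controls needed to recover the true effect without the researcher hand-picking them.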
Future prospects for the Economics of AI:-
#. Researching the direction of AI innovation:-
Maintaining diversity in the range of technologies being developed may be beneficial, particularly when we do not yet know their pros and cons. However, as Daron Acemoglu showed in a 2011 paper, the market will undersupply alternatives to a dominant technology if innovators cannot capture the benefits of preserving technological diversity.
Much of the research discussed at the NBER conference adopted a “monolithic” conception of AI that matches the machine learning paradigm dominant in the field today, and neglected the drawbacks of this approach. However, as Gary Marcus has argued in recent work, other strategies may be required to make AI systems more robust and suitable for high-stakes domains such as health.
#. Modelling AI's failures:-
Macro analyses of AI's impact conclude that AI can improve productivity as long as firms make the requisite complementary investments. They pay little attention to present-day AI problems such as algorithmic manipulation, bias and error, worker resistance to AI-driven instructions, or information asymmetries in AI markets. These factors could dampen AI's effect on productivity (making it mediocre and hence mostly labour-displacing), increase the need to invest in new complements such as AI monitoring and moderation, hinder trade in AI goods and services of uncertain quality, and have major distributional consequences, for example through algorithmic discrimination against vulnerable communities.
#. Modelling progress in AI:-
Future research could address these gaps by building structural models of AI progress around an AI production function that turns data, tools, computing resources and skilled labour into AI systems. Miles Brundage has begun to describe qualitatively what such a model might look like. The model could be operationalised using open, web-based data from AI measurement projects such as those of the Electronic Frontier Foundation (EFF) and the Papers with Code project, in order to study the structure, geography and competitiveness of the AI industry and how it supplies AI technology and knowledge to other industries. Recent work by Felten, Raj and Seamans, which uses EFF metrics to connect developments in AI technology with employment at risk of automation, shows how this kind of research could help predict the economic impacts of AI development and inform policy.
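To make the idea of an AI production function concrete, here is a toy sketch, entirely my own illustration rather than Brundage's model, that treats AI output as a Cobb-Douglas function of data, compute, tools and skilled labour. All functional forms and parameter values are assumptions for the sake of the example.

```python
def ai_output(data, compute, tools, labour,
              alpha=0.3, beta=0.3, gamma=0.1, delta=0.3, A=1.0):
    """AI systems produced as a Cobb-Douglas function of four inputs.

    The exponents (output elasticities) and the productivity term A are
    illustrative assumptions, not estimates.
    """
    return A * data**alpha * compute**beta * tools**gamma * labour**delta

# With this functional form, doubling compute alone multiplies output
# by 2**beta, i.e. roughly a 23% increase when beta = 0.3.
base = ai_output(100, 100, 100, 100)
more_compute = ai_output(100, 200, 100, 100)
print(round(more_compute / base, 3))  # → 1.231, i.e. 2**0.3
```

Operationalising such a model would mean estimating the elasticities from data on real AI projects, for instance the EFF and Papers with Code measurements mentioned above, rather than assuming them as this sketch does.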
In general, the analyses discussed at the Economics of AI conference modelled AI as an exogenous shock to the economy, in some cases quite literally, as in Daniel Rock's study of the effect of TensorFlow's release on the market valuation of companies. However, AI development is itself an economic process, and studying it should be part of the Economics of AI agenda.
#. Remembering AI’s Political Economy:-
At the inaugural Economics of AI meeting, Trajtenberg, and Korinek and Stiglitz, asked who will gain and who will lose when AI arrives, whether AI deployment might become politically unacceptable, and what measures should be put in place to minimise AI's societal costs. More recently, Daron Acemoglu and Pascual Restrepo have expressed concern that the AI industry may be building “the wrong kind of AI” because it does not take into account the indirect effects of AI (for example, labour market disruption) and because some of its leading actors are biased in favour of mass automation regardless of its downsides. These critical concerns were largely absent from the Toronto discussion, but economists need to formalise and operationalise models of AI's distributional impacts and externalities in order to advise policymakers on ensuring that its economic benefits are broadly shared, and to reduce the likelihood of a public backlash against the technology.
In other words, the future of AI in the economy will look more like the Internet than Skynet: it will be complex. Prediction machines not only increase the number of decisions we can make with the help of AI advice, but also the number of decisions we need to make, as an economy and as a society, about which AI developments to pursue, where to deploy them and how to handle their impacts. As the timely debates at the recent Economics of AI conference have shown, some of the finest economists in the world are hard at work producing the theories and evidence needed to support these judgments.