Artificial Intelligence and Machine Learning for Business: A No-Nonsense Guide to Data Driven Technologies


This article provides a comprehensive overview of the main ethical issues related to the impact of Artificial Intelligence (AI) on human society. AI is the use of machines to do things that would normally require human intelligence. AI has rapidly and significantly affected many areas of human life and the ways we interact with each other, and it will continue to do so. Along the way, it has presented substantial ethical and socio-political challenges that call for thorough philosophical and ethical analysis, and its social impact should be studied in order to avoid negative repercussions.

AI systems are becoming increasingly autonomous, apparently rational, and intelligent. This development gives rise to numerous issues. Beyond the potential harm AI technologies can do to our privacy, concerns include their moral and legal status (including moral and legal rights), their possible moral agency and patienthood, and questions about their possible personhood and even dignity. It is common, however, to single out the following issues as being of utmost significance for AI's relation to human society, grouped into three time periods:

(1) short term (early 21st century): autonomous systems (transportation, weapons), machine bias in law, privacy and surveillance, the black-box problem and AI decision-making;

(2) mid term (from the 2040s to the end of the century): AI governance, determining the moral and legal status of intelligent machines (artificial moral agents), human-machine interaction, mass automation;

(3) long term (from the 2100s onward): technological singularity, mass unemployment, space colonisation.







I've been thinking a lot lately about this storytelling that we speakers do -- it's part of what I call the "ed-tech imaginary." This includes the stories we invent to explain the necessity of technology, the promises of technology; the stories we use to describe how we got here and where we are headed. And despite all the talk about our being "data-driven," about the rigors of "learning sciences" and the like, much of the ed-tech imaginary is quite fanciful. Wizard of Oz pay-no-attention-to-the-man-behind-the-curtain kinds of stuff.


This is my great concern with much of technology, particularly education technology: not that "artificial intelligence" will in fact surpass what humans can think or do; not that it will enhance what humans can know; but rather that humans -- intellectually, emotionally, occupationally -- will be reduced to machines. We already see this when we talk on the phone with customer support; we see this in Amazon warehouses; and we see this in adaptive learning software. Humans being bent towards the machine.


Data science, and particularly its related discipline of machine learning, has brought the world astonishing results. We have seen machine learning develop from recognizing a cat in a picture to generating the next Rembrandt [1]. Recent advances in deep learning and deep generative adversarial networks are currently being used to develop new medicines for curing cancer [2]. All of these results seemingly point to a future where data-driven scientific discovery is the way forward [3]. While this may be appealing to data scientists, I believe there are fundamental limitations to solving problems with data alone [4].
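
To make the "recognizing a cat in a picture" example concrete, here is a minimal sketch (not from the article) of image classification with a pretrained network. It assumes PyTorch and torchvision are installed and that an image file named cat.jpg is available locally; the model and file names are illustrative choices, not anything specified by the author.

```python
# Minimal sketch: classify an image with a pretrained ImageNet model.
# Assumes torch/torchvision are installed and "cat.jpg" exists locally.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT          # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()    # inference mode
preprocess = weights.transforms()                  # matching resize/crop/normalize

image = Image.open("cat.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)             # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)     # class probabilities

top_prob, top_idx = probs.max(dim=1)
label = weights.meta["categories"][top_idx.item()]
print(f"Predicted: {label} ({top_prob.item():.1%})")
```

If the picture does contain a cat, the top prediction will typically be one of the ImageNet cat categories (e.g. "tabby" or "Egyptian cat"), which is exactly the kind of pattern-recognition success, learned purely from data, that the passage above is referring to.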

