Human Dignity in the Era of Artificial Intelligence and Robotics: Issues and Prospects

ABSTRACT: One of the most disruptive innovations of this century is the emergence, growth and development of Artificial Intelligence (AI) and Robotics. This paper seeks to highlight the prospects and unearth the issues that the emergent field of AI and Robotics raises for human dignity. Despite the substantial evidence supporting the advantages of AI and Robotics, researchers, industry experts, academics, and individuals hold differing opinions about their future. While some believe these remarkable innovations have boundless potential and significant advantages, others worry about an impending "rise of the machines", its impact on human dignity and the annihilation of humanity. Leveraging a critical analysis of extant literature and document analysis, this paper highlights the key benefits of AI and the salient ethical considerations. The study contends that though we cannot stop the development and advancement of AI, which has ushered in tremendous improvements to human life across diverse sectors, there is an urgent need to design and institute a sound regulatory framework to guide the further development and application of Artificial Intelligence systems and robotics, one aimed at assuring human existence and respecting the dignity of the human person.

Since the 1960s, discussions on AI have swung like a pendulum between periods of optimism and periods of disappointment. However, with the massive investments in this area by corporations such as Google, Dropbox, Facebook and other online retail chains, and the launch of innovative applications such as IBM's Watson and IPSoft's Amelia, one can confidently say that the evolution of AI and Robotics will continue into the far future. As the convergence of computer intelligence and applications for business processes accelerates, the industry is developing a new class of intelligent automation capable of performing activities that hitherto required high cognitive skills and highly trained workers (KPMG, 2016). The fear, therefore, is that these intelligent robots could replace more than 100 million trained workers by 2025 (Manyika, Chui, Bughin, Bisson, & Marrs, 2013), from clerical and administrative tasks to sales and technology. This might include job categories that demand higher cognitive abilities such as decision-making, judgement and interaction.
General Motors in 1961 introduced the use of robots in its Detroit assembly plant, and this heralded the use of robots in diverse areas of manufacturing. Since that time, robots have been central to improvements in productivity and scaling of business operations.
"At least 40% of all businesses will die in the next ten years… if they don't figure out how to change their entire company to accommodate new technologies." -John Chambers (Osinubi, 2018).
Traditionally, certain business activities, such as handling customer queries, providing administrative services, carrying out clinical research in hospitals, and managing certain aspects of legal and financial services, have required a human being who uses cognitive skills and situational analysis to reach a decision and act on it. However, evolution and advancements in cognitive technology and automation have lowered the barrier posed by this human requirement. Consequently, humanity views technology not only through the prism of potential economic impact but also through its capacity to disrupt. And these two often go hand in hand. Joseph Schumpeter, the 20th-century economist, observed that the most significant advances in economies emerge through "creative destruction", which impacts profit pools, distorts industry structures, and replaces existing businesses (Manyika, Chui, Bughin, Bisson, & Marrs, 2013). This process is almost always triggered by entrepreneurs and driven by technological innovations.
AI and Robotics represent machines having the capacity for reflection and the ability to react to external stimuli in a way consistent with how a typical human being would respond to the same stimuli. Shubhendu and Vijay (2013) aver that these computer systems make decisions that hitherto were the prerogative of humans and thus require human-level expertise and assist humans in anticipating challenges and dealing with issues as they emerge. One can therefore state that these systems operate in an intentional, intelligent and adaptive mode (Shubhendu & Vijay, 2013).

Conceptualising Human Dignity: Who is Man and What is Man?
Everything on earth achieves its true meaning when viewed from the human standpoint, thus making man the foundation of existence. To understand the concerns pertaining to man's dignity, one must first understand the fundamental nature of man: who man is and what man is. To a large extent, this concept of man can be grasped fully only from a philosophical perspective. This is because, beyond the purely biological account, philosophy captures the very nature of man and what it means to be a human being. Thus, the wholeness and fullness of human existence can only be examined and explained through a philosophy of who man is. As an intelligent and emotional being, man thrives in collective freedom and attains fulfilment through acts of will. Although his emotional makeup is comparable to that of animals, his intelligence, cognitive skills, reason, and autonomy set him apart as a unique being. Man can comprehend abstract universal concepts like loyalty and trustworthiness and immaterial realities like fortitude, justice and truth.
The best explanation of who man is comes from the Roman philosopher Boethius, who defined the person as "an individual substance of a rational nature". In essence, man is a spiritual being with a rational, intellectual soul and essence. In affirmation, Aristotle posits that man is a rational being with adaptive behaviour, behavioural regulation, reflectivity, and the ability to make wise decisions.
Furthermore, the essence of man can be used to describe who man is. This relates to his 'being', which makes man a person. It is his being a person, and not a 'thing', that confers on man his essential dignity. Awareness of this dignity has significant consequences, because a person's dignity is determined by his innate essence, which is the foundation of his human rights, rather than by his accomplishments or intelligence, which depend on activity or intellect. Only humans have the right to be treated with dignity, which, according to Nordenfelt & Edgar (2005), comes in four dimensions: (1) dignity of merit, which emanates from a person's formal and social status; (2) dignity of moral stature, which is tied to one's ethical behaviour; (3) dignity of identity, which is linked to the integrity of one's body and mind and to one's self-perception; and (4) essential dignity.
At the core of most scholarly discourse on human dignity is man's essential dignity, which can be presented in one of these diverse contexts: cosmological, anthropological, philosophical, religious, and political (Riley & Bos, 2016).
Political attempts to describe, reevaluate, and justify the dignity of human beings through constitutions and the law have influenced how human dignity is viewed in our society. For instance, the United States Supreme Court has invoked human dignity in its interpretation of the First, Fourth, Fifth, Sixth, Eighth and Fourteenth Amendments to the Constitution (Kilner, 2003). This debate is evident in the widespread demand for rights by specific groups, such as the LGBTQ community, as they seek social inclusion. Politicians are occasionally pressured into enacting laws in support of these groups to uphold human dignity. Thus, from a political perspective, human dignity correlates with civil rights in our society. Moreover, by legalising the idea that the nature of humanity must primarily be recognised and respected, Steinman (2016) contends, dignity encapsulates the essence of what constitutes a human being.
The discussion above highlights two fundamental ideas: respect for man and the rights of man. The intrinsic or essential dignity of man entails respecting man for the sake of his existence and not for any unique attributes or characteristics. This is the aspect of man's dignity that this paper examines in the context of AI and robotics.

Kant and the Philosophical Perspectives on Human Dignity
In his lectures on ethics, Kant asserts that a person's worth "is above all price". According to Kant, because humans are autonomous beings, they should be seen as ends in themselves and treated as beings with value and dignity. By interpretation and application to the context of this paper, the value of AI should lie in the benefits human beings derive from using these systems. Humans are the beings who place value on inanimate objects, and they are the beings whose moral worth may be judged by the deeds they commit. Furthermore, according to Kant's theory of dignity, people should not be treated like objects or have their free will disregarded. Even though inherent dignity implies individuality, autonomy and rationality, it does not assess the value of personal judgements or reasoning. Instead, the idea of intrinsic worth proposes that everyone has value simply by being human, regardless of their decisions.
In conclusion, Kant holds that human beings must have an absolute value that cannot be weighed against the worth of anything else. Therefore, AI and Robotics are created to serve the needs of man and, thus, should not be accorded values that equate them to the human person. According to Kantian Theory, any invention or breakthrough in AI and robotics that acts outside of this violates the dignity of man.
Used extensively in health and legal discourse, human dignity is a prerequisite for guaranteeing human self-respect. It is the honour and respect accorded to a person simply for who he is, not for what he is capable of. Thus, the concept of "who man is" matters more than "what man is". Man's worth derives from his spiritual being, essence, and soul. It is a notion representing a legal standing that establishes the framework for drafting constitutional texts and for the international adjudication of human rights.
McCrudden (2008) asserts that in the context of human rights, human dignity does not offer a generalised, ethical foundation for judicial decision-making. The study nonetheless accepts the possibility of elucidating a minimum definition of human dignity that has been adopted globally, incorporating three components from Neuman's (2000) work to provide what it refers to as the "minimum core" of human dignity:
i. that every human being possesses an intrinsic worth by merely being human;
ii. that people should acknowledge and value this intrinsic worth; and
iii. that the state exists for individual human beings, and not the other way round.
Consequently, human dignity is equated with the intrinsic value that people possess simply by existing as people (Schroeder & Bani-Sadr, 2017). Human dignity, then, is a timeless and enduring virtue that characterises human nature while remaining adaptive to a constantly changing conception of what man is. It establishes links between man and his "self", others, his immediate surroundings, and the universe in which he lives. This raises the question of how people interact with robots and how such interaction affects their sense of autonomy, privacy, and self-worth. Because developments in AI have brought robots into interaction with humans, we may need to reconsider what dignity means, devise strategies for adapting to these emerging concepts, and regulate innovations in the field of AI and robotics.

Prospects and Advancements in AI and Robotics in Select Sectors
Evidence abounds of phenomenal growth in AI and robotics aimed at enhancing productivity in nearly every sphere of human endeavour, from healthcare to finance, security, education, transportation and even law and criminal justice. Recent advancements in this field have captured the interest of academia, industry, and the whole of human society. Robotics and AI have been shown to significantly amplify human potential by enhancing productivity, and they have evolved from uncomplicated reasoning to human-like cognitive abilities.

Healthcare
In recent times, there have been marked developments in medicine, with AI driving improvements in the accuracy and efficiency of diagnosis and treatment. With inventions inspired by the likes of Baymax and the tricorder now appearing in homes, offices, and clinical settings all over the world to support, diagnose, and treat people, the question arises as to whether AI-based systems will someday replace doctors or merely supplement their work.
AI tools have also been a critical factor in expanding the role of computation in healthcare. Merantix, a German company that applies deep learning to medical problems, has created a medical imaging application that can find lymph nodes in computed tomography scans of the human body (Rothe, 2017). Though this task can be handled by a radiologist at a fee of US$100 per hour, reading about four images in that hour, the cost of reading 10,000 images would be an astronomical US$250,000. In this scenario, deep learning can train computers on data sets to discern between regular and irregular lymph nodes and thus make the exercise faster and cheaper (West & Allen, 2018). AI has also been applied to congestive heart failure, a health challenge that costs the United States about US$35 billion annually. With its ability to predict potential complications and allocate resources that help patients understand, sense and carry out preventive actions, AI reduces and minimises the incidence of this health challenge (Horvitz, 2016). The COVID-19 pandemic demonstrated that cutting-edge artificial intelligence technology, including intelligent machines and robots, can serve as a possible solution for containing the devastating effects of the virus. UVD Robots, for instance, were created as decontamination robots and can be deployed to destroy the COVID-19 virus in industrial settings.
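The cost comparison quoted above reduces to simple arithmetic; a minimal sketch, using only the figures cited from West & Allen (2018):

```python
# Back-of-the-envelope check of the radiologist cost figure quoted above.
# All inputs are the cited assumptions: US$100/hour, ~4 images per hour,
# and a workload of 10,000 images.

hourly_rate = 100        # US$ per radiologist-hour (quoted)
images_per_hour = 4      # images read per hour (quoted)
total_images = 10_000    # images to be read (quoted)

hours_needed = total_images / images_per_hour   # 10,000 / 4 = 2,500 hours
total_cost = hours_needed * hourly_rate         # 2,500 * 100 = US$250,000

print(f"{hours_needed:,.0f} hours, US${total_cost:,.0f}")
# → 2,500 hours, US$250,000
```

The arithmetic confirms the US$250,000 figure quoted in the text, which is the baseline against which the deep-learning alternative is judged faster and cheaper.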

Education
In recent years, AI technologies have attracted the interest of academics and industry practitioners (Kaur, 2021). They are seen as a contemporary method of teaching and learning with the potential to address diverse learning challenges.
In the education sector of the US economy, the AI market size is expected to increase by US$253.82 million between 2021 and 2025, with the market's growth momentum accelerating at a CAGR of 49% (Technavio, 2021). AI is instrumental in addressing issues of content accessibility and teacher shortages, thus making learning seamless and stress-free (Ahmad, Rahmat, Mubarik, Alam, & Hyder, 2021). Aside from its impact on tutoring systems, innovative learning, and social robots, it plays a significant role in virtual facilitation, online learning ecosystems, learning management systems and analytics.
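To make the quoted 49% compound annual growth rate concrete, the sketch below compounds a hypothetical starting market size over the 2021-2025 window. Only the 49% rate comes from the Technavio (2021) figure above; the US$100 million base is an assumption chosen purely for illustration.

```python
# Compound annual growth: value_n = base * (1 + CAGR) ** n.
# The base value here is hypothetical; only the 49% CAGR is from the text.

cagr = 0.49
base_2021 = 100.0  # hypothetical market size in 2021, US$ millions

for year in range(2021, 2026):
    value = base_2021 * (1 + cagr) ** (year - 2021)
    print(year, f"US${value:,.2f}m")
```

Under these assumptions, a 49% CAGR roughly quintuples the hypothetical base within four years, which illustrates why the projected absolute increase in the market is so large.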
Advancements in AI in the education sector have been fast-tracked by the COVID-19 pandemic, which triggered forced lockdowns across the world. Faced with a global pandemic, AI played a critical role in providing access to learning resources and improved communication between students and teachers. It has also enabled personalised learning, where instruction is tailored to students' knowledge levels, learning speed, and overarching objectives (Ahmad, Rahmat, Mubarik, Alam, & Hyder, 2021). With AI, the learning history of individual students can be analysed to assess their strengths and weaknesses and to recommend ways to improve. Furthermore, AI has addressed the challenge of timely response: its ability to answer repetitive and commonly asked questions in milliseconds overcomes the frustration of delayed feedback. However, the most significant contribution of AI to the education sector lies in its universality. With access to the internet and the relevant technology, students can access educational services from anywhere in the world.

Criminal Justice
Studies and statements by experts in criminal justice assert that AI programs can reduce the incidence of human bias in law enforcement and thus lead to a fairer sentencing system in the judiciary (West & Allen, 2018). Security agencies, organisations and governments are attempting to leverage AI to detect and prevent crime. By filtering vast amounts of data to identify functional legal patterns, crimes can be predicted (Wickramarathna & Edirisuriya, 2022). This was the dream of many governments a few decades ago and is today becoming a reality. AI has proven its indispensability in the justice ecosystem by aiding investigations and allowing professionals in the justice sector to better maintain public safety.
In Chicago, for instance, AI has been deployed to develop a "strategic subject list", which analyses individuals with a history of crime to predict the likelihood of repeat criminal activity. Using specific indices such as age, criminal activity, gang affiliation, drug arrest records and others, the system can rank over 400,000 individuals on a scale of 0 to 500 according to their propensity for crime. Other countries, such as China, have gathered considerable data on voices, faces, and other biometrics, which helps develop their criminal justice systems (Mozur & Bradsher, 2017). These data can be combined with other forms of information to yield insights for improving criminal justice. Through its "Sharp Eyes" program, the Chinese government can match voices, faces, video images, social media activity, travel history and online purchases and upload them to the police cloud to enhance the criminal justice system (Denyer, 2018).

Cyber Security
According to many security analysts, security-related incidents reached an all-time high in 2019 (Pupillo, Fantin, Ferreira, & Polito, 2021). From ransomware to phishing, and from the dark web to attacks on civil infrastructure, these attacks have been increasing steadily and growing more sophisticated in recent years. For instance, the volume of recorded malware threats averaged about 688 threats per minute, an increase of 40 threats per minute (3%) within the first quarter of 2021 (McAfee, 2021). This can be attributed to cybercriminals taking advantage of the COVID-19 pandemic and the growing dependency of individuals and corporate bodies on digital media.
According to a statement by Interpol, 907,000 spam messages related to the pandemic were detected between January and April 2020 (Pupillo, Fantin, Ferreira, & Polito, 2021). The proliferation of zero-day cyber threats and diverse polymorphic malware will thus challenge even the most sophisticated defences against cybercriminals. To address this challenge, cyber security must keep evolving, and this is where AI comes in. Organisations are leveraging AI to manage this menace by enhancing the robustness of their systems, their resilience, and their response lead time. In this scenario, AI systems, through synergistic integration, will assist security analysts in improving the speed of operations, de-emphasising any competition between man and machine.

Finance
The digitalisation of society has necessitated the adoption of AI in the finance sector of world economies. The centrality of Artificial Intelligence was brought to the fore by the outbreak of the COVID-19 pandemic, with global spending on AI projected to hit US$110 billion by 2024 (OECD, 2021). This growth, driven by abundant data and increasingly affordable computing capacity, can be attributed to spending in critical areas such as credit underwriting, algorithmic trading, asset management and blockchain-based financial services.
For instance, using AI, decisions on loans can be made by software that draws on a variety of data about the borrower instead of depending simply on background checks and credit scores (Popper, 2016). Additionally, "robo-advisers" can create personalised investment portfolios, bypassing the need for stockbrokers and financial advisers and their attendant cost. The implication is that emotion is removed from decision-making, making way for analytical considerations that can be completed in minutes. Today, machines can identify and isolate minute trading inefficiencies or market differentials and conduct profitable trades based on customer instructions.

Transportation
The transportation sector represents a critical area where AI is driving significant innovation. This is evident in the emergent area of Intelligent Transport Systems, which incorporate public transport, traffic management, safety management, and the related manufacturing and logistics in which AI systems can be used (Shankarlyer, 2021). Autonomous vehicles and drone delivery systems draw on advanced technological capabilities, including lane-changing systems, automated vehicle guidance and braking, cameras and sensors for collision avoidance, real-time traffic information analysis, detailed maps and many others (West, 2016). Of great importance also are light detection and ranging systems (LIDAR), which work with AI to facilitate navigation and prevent crashes. This is achieved using sensors mounted on all sides of the vehicle that convey information to keep fast-moving vehicles within their lane and direct them away from other vehicles.
The considerable amount of data these sensors collect makes it imperative to have sophisticated algorithms, high-performance computing, and deep learning systems, enabling vehicles to adapt seamlessly to new scenarios. Consequently, these autonomous vehicles can learn from the experience of other related vehicles and use this experience to adjust their systems in relation to weather, road or driving conditions (West & Allen, 2018). In advanced economies, ride-sharing businesses like Uber, Hailo, Didi, Mytaxi and Lyft are exploring using autonomous cars. Their interest demonstrates the opportunities inherent in this technology.

National Security
In many advanced nations, AI has emerged as a crucial component of national defence. In the United States, for instance, Project Maven, driven by AI, is focused on sifting through vast troves of video and text data captured via surveillance, analysing them for patterns and alerting the government to suspicious or abnormal activity (Davenport, 2017). The ability of AI to sort through huge amounts of data in real time will provide military institutions with intelligence that would hitherto have been thought impossible. In warfare, speed is of the essence: the party that decides fast and makes the first move will most likely prevail. Consequently, AI, with its capacity for speed, can facilitate split-second decisions superior to traditional methods of waging war. The speed of the process and the attendant automation have led to the coining of a new term: hyperwar (West & Allen, 2018).

Ethical Concerns on Human Dignity in the Context of AI and Robotics
From the preceding, evidence abounds of the importance of AI and Robotics in the growth and development of our society. However, advancements in this sector have raised salient concerns about ethics and human dignity. Some of these concerns are discussed here.

Invasion of Privacy and Surveillance
The evolution of AI and Robotics has elicited profound discourse on human privacy and unchecked surveillance (Macnish, 2017; Roessler, 2017). These discussions focus primarily on access to sensitive information and personally identifiable data. Privacy, in this context, relates to the entitlement to be left alone, control over personal data, and the right to confidentiality and privacy of personhood (Bennett & Raab, 2006). With the rise of AI and Robotics, it is becoming more difficult to control who collects what information and who gets access to it. For instance, face recognition in images and videos facilitates the identification of people, as well as profiling and searching for them across diverse search engines (Whittaker et al., 2018). Other methods of identification, such as "device fingerprinting", are also widely used on the internet to amass personal information, leading to a disturbingly full image of ourselves across diverse digital networks (Smolan, 2016). The data we leave behind pays for our "free" internet services. Unfortunately, we are not privy to the data being gathered or to the value of this new "raw material", even as we are continuously coerced into leaving more of our personal information behind as we scour the diverse digital media platforms.
These companies' primary method of data gathering seems to be built on dishonesty, exploiting people's weaknesses, creating addiction, and manipulation. This appears to be true of the big five digital organisations: Amazon, Google/Alphabet, Microsoft, Apple, and Facebook. Thus, social media, video games, and a large portion of the internet all share the primary objective of gaining, retaining, and directing attention through the supply of data in this "surveillance economy". These systems frequently reveal truths about us that we would prefer to keep hidden or are unaware of ourselves. Harari (2016) asks what will happen to society, politics, and daily life when non-conscious but highly intelligent algorithms understand us far better than we understand ourselves.

Behavioural Manipulation of the Human Person
The ethical concerns that emanate from AI-enabled surveillance go beyond data gathering and the redirection of attention. They also encompass the use of data to sway online and offline behaviour in ways that interfere with autonomous, rational human decision-making (Muller, 2021). As a result of the relationship between people and data systems, and the rich knowledge of humans that such activity provides, individuals become susceptible to deception and manipulation. By leveraging this deep understanding of internet users, sophisticated algorithms can deliver precisely the information most likely to influence small groups or targeted individuals.
The practice is becoming popular in advertising, marketing, and sales, where practitioners employ all available legal means to exploit behavioural biases, deception, and addiction to maximise profit (Costa & Halpern, 2019). This business model, often called "dark patterns", is adopted in diverse industries such as gambling, gaming, and low-cost airlines (Muller, 2021). The challenge is that while the sale of addictive substances and gambling are strictly regulated, the online manipulation of individuals is not. The practice is also evident in political propaganda, where it is used to manipulate voting behaviour and thus seriously harms the autonomy of the human person.

Opacity in AI Systems
Central to what we now call data ethics are opacity and bias (Floridi & Taddeo, 2016). Serious concerns exist regarding the lack of due process, accountability, community involvement, and auditing when AI systems are used for automated decisions and predictive analytics (Whittaker et al., 2018). Such systems constitute a power structure in which decision-making processes are designed to inhibit human participation, to the extent that the human person is not privy to the process through which decisions are reached. This makes the AI system opaque by design and thus poses a challenge to identifying patterns and their definition. This opacity, in turn, reinforces bias in data sets and decision-making algorithms.
In simple terms, the outcome is not transparent even to the programmers. In addition, the outcome is heavily reliant on the quality of the data provided: if the data is fraught with bias, the result will incorporate that bias. Kissinger (2018) contends that reliance on technology claimed to be superior to people creates a fundamental issue for decision-making, especially when the system cannot explain its decisions.

Bias in Data Analysis and Decision-Making Systems
Predictive analytics and automated AI decision-support systems analyse data and generate a decision as output. Such output can range from the highly consequential to the relatively trivial: "your application for a bank loan has been turned down", "the applicant is ineligible for free medical care", "the candidate is not qualified for a scholarship", and so on. In business, healthcare, and other industries, data analysis is commonly used to predict future developments.
This is evident in the recent trend referred to as "predictive policing", which many fear might negatively impact civil liberties (Muller, 2021). Concerns over predictive policing stem from the fact that it relies on anticipating and punishing intended conduct rather than waiting for crimes to be committed before acting. The worry is that such systems may perpetuate the bias inherent in the data used to design them.
For instance, the Chinese government uses advanced AI algorithms to classify people according to social traits. This social credit system aids the government's data collection on its residents and assigns each one a score based on how trustworthy the government deems them. The system also has a penal design that humiliates "debtors" by displaying their faces on big screens in public places or barring them from specific social services. Such a system can, however, discriminate against marginalised groups by categorising them on the basis of specified physical characteristics.

Digital Automation and Employment
The industrial revolution brought with it an unprecedented level of productivity across diverse industry sectors. Many manufacturing companies are turning their attention to automation because of the boom in digital technologies. However, automation inevitably means that fewer workers are needed to produce the same output. Across the ages, there have been significant shifts in the labour market. For instance, in North America and Europe, farming employed more than 60% of the workforce in 1800; by 2010, it accounted for only 5% of the workforce in the European Union and even less in other prosperous economies (European Commission, 2013).
Whereas traditional automation substitutes for human muscle, digital automation supplants the human brain and its information processing. Digital automation is also cheap and easy to replicate compared with physical machines and will most likely lead to drastic changes in the labour market. A further, more troubling concern is whether AI development is environmentally sustainable, an important subject that has not received the attention it deserves: AI systems, like other computing systems, produce waste that is very hard to recycle.

Autonomous Systems
In diverse philosophical debates on autonomous systems, autonomy is the basis for accountability and personhood (Christman, 2018). A system can be regarded as autonomous only relative to the degree of human control over it (Muller, 2012). Opacity, bias and autonomy in AI are related, because all three entail a power relationship: who controls, and who is responsible? Good examples of autonomous systems are autonomous vehicles and weapons.
Autonomous vehicles have the potential to significantly lessen the severe harm that human driving can cause. Nonetheless, concerns exist about how autonomous vehicles operate and how risk and accountability should be distributed in the complex environments in which they function. The more common driving-related ethical issues, such as speeding, unsafe overtaking, and failing to maintain a reasonable distance, are classic examples of putting one's own interests ahead of the welfare of others. Conversely, the responsibility inherent in driving, which used to be solely the domain of the human person, is now shared by the companies that produce and operate the technological systems in automated driving and by those in charge of decision-making on infrastructure, policy and law. In the same vein, autonomous weapons systems can deploy weapons on sea, air and land with capabilities for complex reconnaissance and attack missions. The primary ethical argument against lethal autonomous weapon systems is that they encourage extrajudicial killings, absolve humans of accountability, and increase the likelihood of war (Lin, Bekey, & Abney, 2008).

Human-Robot Interaction
Human-robot interaction (HRI) is an evolving and distinct academic field that places a high priority on ethical issues, the dynamics of perception on both sides, and the complexity of the social setting in which humans and robots relate. Interest in the human-robot relationship has been captured in various studies (Calo, Froomkin, & Kerr, 2016; Royakkers & van Est, 2016; Tzafestas, 2016) and centres on the potential of AI systems to deceive people into believing and acting in specific ways, endanger human dignity, and contravene the Kantian principle of "respect for humanity" (Muller, 2021). There are glaring differences between human-human and human-robot relationships in certain domains of human activity, such as love, care, and sex.
For instance, the use of robots in human healthcare is still largely at the conceptual stage in real-world settings. However, it may become a practical technology in the near future, raising worries about a dystopian future of dehumanised care (Sharkey & Sharkey, 2011; Sparrow, 2007). Existing systems include robots that help human caregivers lift patients or move items, robots that assist patients in completing specific tasks on their own (e.g., eating with a robotic arm), and robots that offer company and comfort to patients and the elderly. However, these robots care for humans strictly in a deontological sense of task execution, not in the sense in which humans "care" for patients. Furthermore, given the evolving human taste for sex toys and sex dolls, it can be assumed that interest in sex robots is likely to increase. It is debatable whether such devices should be created and promoted, and whether restrictions should exist in this contentious field. A critical issue concerning sexual relationships with androids is consent (Frank & Nyholm, 2017), along with the concern that certain experiences can "corrupt" the human person.

Artificial Moral Agents
As rational beings, humans are referred to as moral agents. If one believes that machine ethics concerns moral agents in a meaningful way, then machines too can be referred to as "artificial moral agents," possessing rights and obligations (Muller, 2021). Compatibilist theorists posit that the human decision-making process is deterministic; nevertheless, so long as human actions are decided upon after careful consideration of their likely consequences, we hold people morally accountable for their deeds. This process is comparable to decision-making in AI and robotics, since AI agents assess the likely outcomes of different decisions and select the best course of action based on their programming and the information and data available to them.
The critical question is whether we can treat AI agents as moral agents. Will robots and AI systems be held accountable, culpable, or responsible for the actions they take? Conversely, arguments exist as to whether robots are entitled to rights (Bryson, 2010; Gunkel, 2018; Turner, 2019). If such rights exist, would it be morally right to destroy these systems by switching them off?

Conclusion: Upholding Human Dignity in the Era of Artificial Intelligence and Robotics
Man, by creation, possesses existential dignity, which makes him a moral agent: one who has the capacity to do good, to discern good from bad, and to be held accountable for actions taken. Across every society, the recognition of the actual worth of the human person is at the heart of the notion of human dignity. If it is perceived that certain practices might disrespect this human worth, it may be necessary to safeguard the human person from them. Additionally, the inability to anticipate such repercussions could itself harm human dignity (Zardiashvili & Villaronga, 2020).
While the human person continues to experience the rapid pace and rising sophistication of AI, a concern that persists whenever AI-related topics are discussed is its impact on the sanctity of human life. It is also possible that AI will one day be employed, intentionally or not, to denigrate the worth of human life. According to the theory of the technological singularity, sentient machines may eventually replace humans as the predominant force on the planet. Weizenbaum (1976) cautioned against using AI to replace people in roles that call for deference and consideration, such as those of customer service representatives, therapists, police officers, soldiers, and judges, because these roles require empathy, a capacity that machines are most likely never going to be able to imitate. He held that if machines were to take over jobs that require empathy, human dignity would be threatened, since people would come to feel alienated, undervalued, and frustrated (Kinos, 2019).
Admittedly, the advancements in AI and robotics cannot be halted entirely, because they have proven beneficial to human life and the human person, as has been witnessed in education, cyber security, national security, and health. This dilemma can instead be addressed by a sound regulatory framework guiding the development and use of AI and robotics, aimed at sustaining and respecting the dignity of the human person.
Statements and Declarations: I confirm that this work is original and has not been published elsewhere, nor is it currently under consideration for publication elsewhere. The study was fully funded by the author, and there are no conflicts of interest to declare. Ethics approval and consent to participate: Not Applicable. Consent for publication: I consent that this manuscript be published in Philosophy and Technology. Availability of data and materials: Not Applicable. Competing interests: I confirm that there are no competing interests in this manuscript.

Authors' contributions: I confirm that this paper was fully prepared by the author without input from any other person.