Chu, H., Li, Y., Ye, J., Hu, F., He, Q., & Zhao, L. (2019). Moral judgment of intelligent machines in personal versus impersonal moral dilemmas. 应用心理学 (Chinese Journal of Applied Psychology), 25(3), 262-271.
Jiao, L., Li, C., Chen, Z., Xu, H., & Xu, Y. (2025). When AI "possesses" personality: The influence of good and evil personas on moral judgments of large language models. 心理学报 (Acta Psychologica Sinica), 57(6), 929.
Xu, L., Yu, F., & Peng, K. (2022). Algorithmic discrimination elicits less desire for moral punishment than human discrimination. 心理学报 (Acta Psychologica Sinica), 54(9), 1076.
Xu, W., Ge, L., & Gao, Z. (2021). Human-AI interaction: An emerging interdisciplinary field for realizing "human-centered AI". 智能系统学报 (CAAI Transactions on Intelligent Systems), 16(4), 605-621.
Yan, X., Mo, T., & Zhou, X. (2024). The influence of Chinese-Western cultural differences on moral responsibility judgments of virtual humans. 心理学报 (Acta Psychologica Sinica), 56(2), 161.
Zhou, S., & Wu, Q. (2019). The possibilities and limits of AI judicial decision-making. 华东政法大学学报 (Journal of East China University of Political Science and Law), (1), 53-66.
An, J., Huang, D., Lin, C., & Tai, M. (2025). Measuring gender and racial biases in large language models: Intersectional evidence from automated resume evaluation. PNAS Nexus, 4(3), pgaf089.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., ... & Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59-64.
Bigman, Y. E., Wilson, D., Arnestad, M. N., Waytz, A., & Gray, K. (2023). Algorithmic discrimination causes less moral outrage than human discrimination. Journal of Experimental Psychology: General, 152(1), 4.
Bigman, Y. E., Yam, K. C., Marciano, D., Reynolds, S. J., & Gray, K. (2021). Threat of racial and economic inequality increases preference for algorithm decision-making. Computers in Human Behavior, 122, 106859.
Bonnefon, J. F., Rahwan, I., & Shariff, A. (2024). The moral psychology of artificial intelligence. Annual Review of Psychology, 75(1), 653-675.
Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77-91). PMLR.
Chu, Y., & Liu, P. (2023). Machines and humans in sacrificial moral dilemmas: Required similarly but judged differently? Cognition, 239, 105575.
Dastin, J. (2022). Amazon scraps secret AI recruiting tool that showed bias against women. In Ethics of data and analytics (pp. 296-299). Auerbach Publications.
De Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331.
Dillion, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences, 27(7), 597-600.
Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411-437.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101-124.
Guo, S., Mokhberian, N., & Lerman, K. (2023, June). A data fusion framework for multi-domain morality learning. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 17, pp. 281-291). AAAI Press.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998-1002.
Heersmink, R., & Knight, S. (2018). Distributed learning: Educating and assessing extended cognitive systems. Philosophical Psychology, 31(6), 969-990.
Hidalgo, C. A., Orghian, D., Canals, J. A., De Almeida, F., & Martin, N. (2021). How humans judge machines. MIT Press.
Hristova, E., & Grinberg, M. (2016). Should moral decisions be different for human and artificial cognitive agents? In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 38). Cognitive Science Society.
Jiang, H., Zhang, X., Cao, X., Breazeal, C., Roy, D., & Kabbara, J. (2023). PersonaLLM: Investigating the ability of large language models to express personality traits. arXiv preprint arXiv:2305.02547.
Jiang, L., Hwang, J. D., Bhagavatula, C., Bras, R. L., Liang, J. T., Levine, S., ... & Choi, Y. (2025). Investigating machine moral judgement through the Delphi experiment. Nature Machine Intelligence, 1-16.
Kahn Jr., P. H., Kanda, T., Ishiguro, H., Gill, B. T., Ruckert, J. H., Shen, S., ... & Severson, R. L. (2012, March). Do people hold a humanoid robot morally accountable for the harm it causes? In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 33-40). ACM.
Kapania, S., Siy, O., Clapper, G., SP, A. M., & Sambasivan, N. (2022, April). "Because AI is 100% right and safe": User attitudes and sources of AI authority in India. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-18). ACM.
Kneer, M., & Stuart, M. T. (2021, March). Playing the blame game with robots. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (pp. 407-411). ACM.
Komatsu, T. (2016, March). Japanese students apply same moral norms to humans and robot agents: Considering a moral HRI in terms of different cultural and academic backgrounds. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 457-458). IEEE.
Komatsu, T., Malle, B. F., & Scheutz, M. (2021, March). Blaming the reluctant robot: Parallel blame judgments for robots in moral dilemmas across US and Japan. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (pp. 63-72). IEEE.
Laakasuo, M., Kunnari, A., Francis, K., Košová, M. J., Kopecky, R., Buttazzoni, P., ... Hannikainen, I. (2025). Moral psychological exploration of the asymmetry effect in AI-assisted euthanasia decisions. Cognition, 262, 106177.
Laakasuo, M., Palomäki, J., Kunnari, A., Rauhala, S., Drosinou, M., Halonen, J., ... Francis, K. B. (2023). Moral psychology of nursing robots: Exploring the role of robots in dilemmas of patient autonomy. European Journal of Social Psychology, 53(1), 108-128.
Ladak, A., Loughnan, S., & Wilks, M. (2024). The moral psychology of artificial intelligence. Current Directions in Psychological Science, 33(1), 27-34.
Ladak, A., Wilks, M., & Anthis, J. R. (2023). Extending perspective taking to nonhuman animals and artificial entities. Social Cognition, 41(3), 274-302.
Lima, G., Grgić-Hlača, N., & Cha, M. (2021, May). Human perceptions on moral responsibility of AI: A case study in AI-assisted bail decision-making. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-17). ACM.
Lima, G., Grgić-Hlača, N., & Cha, M. (2023, April). Blaming humans and machines: What shapes people's reactions to algorithmic harm. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-26). ACM.
Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A theory of blame. Psychological Inquiry, 25(2), 147-186.
Malle, B. F., Magar, S. T., & Scheutz, M. (2019). AI in the sky: How people morally evaluate human and machine decisions in a lethal strike dilemma. Robotics and Well-being, 111-133.
Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015, March). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 117-124). IEEE.
Malle, B. F., Scheutz, M., Cusimano, C., Voiklis, J., Komatsu, T., Thapa, S., & Aladia, S. (2025). People's judgments of humans and robots in a classic moral dilemma. Cognition, 254, 105958.
Manoli, A., Pauketat, J. V., & Anthis, J. R. (2025). The AI double standard: Humans judge all AIs for the actions of one. Proceedings of the ACM on Human-Computer Interaction, 9(2), 1-24.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175-183.
Nazer, L. H., Zatarah, R., Waldrip, S., Ke, J. X. C., Moukheiber, M., Khanna, A. K., ... Mathur, P. (2023). Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6), e0000278.
Nijssen, S. R., Müller, B. C., Bosse, T., & Paulus, M. (2023). Can you count on a calculator? The role of agency and affect in judgments of robots as moral agents. Human-Computer Interaction, 38(5-6), 400-416.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
Pammer, K., Gauld, C., McKerral, A., & Reeves, C. (2021). "They have to be better than human drivers!" Motorcyclists' and cyclists' perceptions of autonomous vehicles. Transportation Research Part F: Traffic Psychology and Behaviour, 78, 246-258.
Pan, K., & Zeng, Y. (2023). Do LLMs possess a personality? Making the MBTI test an amazing evaluation for large language models. arXiv preprint arXiv:2307.16180.
Ryoo, Y., Jeon, Y. A., & Kim, W. (2024). The blame shift: Robot service failures hold service firms more accountable. Journal of Business Research, 171, 114360.
Schramowski, P., Turan, C., Andersen, N., Rothkopf, C. A., & Kersting, K. (2022). Large pre-trained language models contain human-like biases of what is right and wrong to do. Nature Machine Intelligence, 4(3), 258-268.
Shah, S. S. (2024). Gender bias in artificial intelligence: Empowering women through digital literacy. Journal of Artificial Intelligence, 1, 1000088.
Shank, D. B., & DeSanti, A. (2018). Attributions of morality and mind to artificial intelligence after real-world moral violations. Computers in Human Behavior, 86, 401-411.
Stuart, M. T., & Kneer, M. (2021). Guilty artificial minds: Folk attributions of mens rea and culpability to artificially intelligent agents. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-27.
Tassy, S., Oullier, O., Mancini, J., & Wicker, B. (2013). Discrepancies between judgment and choice of action in moral dilemmas. Frontiers in Psychology, 4, 250.
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117.
Xu, W. (2019). Toward human-centered AI: A perspective from human-computer interaction. Interactions, 26(4), 42-46.
Yam, K. C., Goh, E. Y., Fehr, R., Lee, R., Soh, H., & Gray, K. (2022). When your boss is a robot: Workers are more spiteful to robot supervisors that seem more human. Journal of Experimental Social Psychology, 102, 104360.