{"id":5816,"date":"2026-02-21T04:07:26","date_gmt":"2026-02-21T04:07:26","guid":{"rendered":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/"},"modified":"2026-02-21T04:07:26","modified_gmt":"2026-02-21T04:07:26","slug":"ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact","status":"publish","type":"post","link":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/","title":{"rendered":"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact"},"content":{"rendered":"<h3>Latest 10 papers on ethics: Feb. 21, 2026<\/h3>\n<p>The rapid advancement of AI\/ML technologies brings immense potential, but also profound ethical challenges. As AI systems become more autonomous and integrated into our lives, ensuring their ethical operation is no longer a desideratum but an imperative. This blog post dives into recent breakthroughs that are pushing the boundaries of ethical AI, moving beyond static guidelines to dynamic, context-aware, and accountable systems, drawing insights from a collection of cutting-edge research.<\/p>\n<h3 id=\"the-big-ideas-core-innovations\">The Big Idea(s) &amp; Core Innovations<\/h3>\n<p>At the heart of recent ethical AI research is a fundamental shift: from defining ethics at design time to implementing and adapting them at <strong>runtime<\/strong>. A pivotal concept, as argued by Marco Autili et al.\u00a0from the <strong>University of L\u2019Aquila<\/strong> and <strong>Gran Sasso Science Institute<\/strong> in their paper, \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.17426\">The Runtime Dimension of Ethics in Self-Adaptive Systems<\/a>\u201d, is that ethical preferences are inherently uncertain and context-dependent. 
They contend that AI systems must dynamically manage these preferences as runtime requirements, emphasizing the need for negotiation mechanisms to handle conflicting values and evolving human, societal, and environmental contexts.<\/p>\n<p>Complementing this, the paper \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.09921\">Operationalizing Human Values in the Requirements Engineering Process of Ethics-Aware Autonomous Systems<\/a>\u201d by Everaldo Silva Junior et al.\u00a0from the <strong>University of Brasilia<\/strong> and <strong>Gran Sasso Science Institute<\/strong>, offers a concrete methodology. They propose the SLEEC (Social, Legal, Ethical, Empathetic, and Cultural) framework, operationalizing human values into normative goals during requirements engineering. This enables early conflict detection and negotiation, leading to more transparent and accountable AI systems. Both papers collectively underscore that ethical behavior in AI isn\u2019t a fixed state but an ongoing process of adaptation and negotiation.<\/p>\n<p>Beyond proactive design, there\u2019s a critical need for <strong>AI-assisted ethical oversight<\/strong>. Yifan Ding et al.\u00a0from <strong>Fudan University<\/strong> and <strong>Shanghai Artificial Intelligence Laboratory<\/strong> introduce \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.13292\">Mirror: A Multi-Agent System for AI-Assisted Ethics Review<\/a>\u201d. Mirror leverages large language models (LLMs) to improve the consistency and professionalism of ethics assessments by combining structured rule interpretation with multi-agent deliberation. 
This system can support both expedited and committee-level reviews, making ethical review processes more scalable and robust.<\/p>\n<p>Addressing a different facet of ethical AI, the concept of <strong>AI as a steward<\/strong> of shared resources, or \u201ccommons,\u201d is explored by Botao Amber Hu from the <strong>University of Oxford<\/strong> in \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.14940\">Kami of the Commons: Towards Designing Agentic AI to Steward the Commons<\/a>\u201d. Inspired by Shinto animism, this work envisions agentic AI dynamically adapting to changes while embodying care and accountability for public resources. However, it also critically examines the dangers of such an approach, including the recursive challenge of governing the governors and the potential for surveillance creep, highlighting the profound ethical implications of granting AI agency in governance.<\/p>\n<p>These innovations also extend to ensuring <strong>fairness and safety<\/strong> in foundational models. Jian Lan et al.\u00a0from <strong>LMU Munich<\/strong> and <strong>Munich Center for Machine Learning (MCML)<\/strong> tackle critical fairness issues in Vision-Language Models (VLMs) with \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2505.23798\">Unveiling the &#8220;Fairness Seesaw&#8221;: Discovering and Mitigating Gender and Race Bias in Vision-Language Models<\/a>\u201d. They identify a \u201cFairness Paradox\u201d where models produce fair-sounding text while maintaining skewed confidence scores, and propose RES-FAIR, a post-hoc framework to mitigate gender and race bias without compromising general reasoning. 
In a similar vein, Rohan Subramanian Thomas et al.\u00a0present \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.13274\">ProMoral-Bench: Evaluating Prompting Strategies for Moral Reasoning and Safety in LLMs<\/a>\u201d, revealing that simple, exemplar-driven prompting strategies can significantly enhance LLM moral reasoning and jailbreak resistance, outperforming more complex approaches.<\/p>\n<p>Finally, as AI permeates education, understanding <strong>student perceptions and ethical priorities<\/strong> is crucial. \u201c<a href=\"https:\/\/doi.org\/10.1145\/3772318.3790360\">AI Sensing and Intervention in Higher Education: Student Perceptions of Learning Impacts, Affective Responses, and Ethical Priorities<\/a>\u201d by Bingyi Han et al.\u00a0from the <strong>University of Melbourne<\/strong> highlights that while students value targeted AI intervention, they react negatively to monitoring, prioritizing autonomy and privacy over transparency and fairness. This human-centered perspective is echoed by the \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.16894\">CreateAI Insights from an NSF Workshop on K12 Students, Teachers, and Families as Designers of Artificial Intelligence and Machine Learning Applications<\/a>\u201d paper by Yasmin Kafai et al., with authors from institutions including <strong>MIT Media Lab<\/strong> and <strong>Brown University<\/strong>, which emphasizes empowering K-12 students as creators and critics of AI, integrating ethical considerations throughout the learning process. 
These educational initiatives stress that a truly ethical AI ecosystem begins with informed and empowered users and creators.<\/p>\n<h3 id=\"under-the-hood-models-datasets-benchmarks\">Under the Hood: Models, Datasets, &amp; Benchmarks<\/h3>\n<p>The innovations highlighted are powered by new and refined resources:<\/p>\n<ul>\n<li><strong>EthicsQA Dataset &amp; EthicsLLM Model:<\/strong> Introduced in the \u201cMirror\u201d paper, EthicsQA is a specialized dataset with 41,000 question\u2013chain-of-thought\u2013answer triples for ethical reasoning. EthicsLLM is a foundational model fine-tuned on EthicsQA, enabling Mirror to make nuanced normative judgments aligned with regulatory frameworks.<\/li>\n<li><strong>ProMoral-Bench &amp; Unified Moral Safety Score (UMSS):<\/strong> This comprehensive benchmark, introduced by Thomas et al., evaluates prompt-based moral reasoning and safety in LLMs across various datasets (ETHICS, Scruples, WildJailbreak, ETHICS-Contrast). UMSS is a new metric balancing accuracy and safety, demonstrating the effectiveness of few-shot prompting. 
Code for ProMoral-Bench is available <a href=\"https:\/\/anonymous.4open.science\/r\/ProMoral_Bench-FFB4\/README.md\">here<\/a>.<\/li>\n<li><strong>PAIRS &amp; SocialCounterfactuals Datasets:<\/strong> Utilized in the \u201cFairness Seesaw\u201d paper, these datasets are crucial for identifying and mitigating gender and race bias in Vision-Language Models, supporting the RES-FAIR framework.<\/li>\n<li><strong>LEGOS-SLEEC-XT &amp; LEGOS Frameworks:<\/strong> Developed by Silva Junior et al., these frameworks (available on GitHub as <a href=\"https:\/\/github.com\/lesunb\/LEGOS-SLEEC-XT\">LEGOS-SLEEC-XT<\/a> and <a href=\"https:\/\/github.com\/lesunb\/LEGOS\">LEGOS<\/a>) operationalize human values into normative goals for ethics-aware autonomous systems, demonstrated in a medical Body Sensor Network case study.<\/li>\n<li><strong>CreateAI Framework:<\/strong> Though not a dataset or model in the traditional sense, this pedagogical framework for K-12 AI education emphasizes tools that support data-driven development and creative expression while integrating ethical considerations from the ground up.<\/li>\n<\/ul>\n<h3 id=\"impact-the-road-ahead\">Impact &amp; The Road Ahead<\/h3>\n<p>These advancements mark a significant leap towards more responsible and trustworthy AI. The move from static ethical rules to <strong>runtime ethical reasoning<\/strong> will be transformative, allowing AI systems to adapt to dynamic moral landscapes, especially in complex multi-stakeholder environments. The development of AI-assisted ethics review systems like Mirror could revolutionize how research and development are vetted, ensuring broader compliance and consistency. 
Furthermore, efforts to mitigate bias in foundational models and enhance their moral reasoning capabilities will lead to fairer and safer AI systems across diverse applications.<\/p>\n<p>The increasing focus on <strong>human-centered design<\/strong> in AI, particularly in education, underscores the importance of student autonomy, privacy, and agency. Empowering the next generation to be creators and critical thinkers of AI, as advocated by the CreateAI framework, is vital for a future where AI serves humanity ethically. However, the sobering insights from \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2602.09202\">Genocide by Algorithm in Gaza: Artificial Intelligence, Countervailing Responsibility, and the Corruption of Public Discourse<\/a>\u201d by Branislav Radelji\u0107 (Aula Fellowship for AI, Montreal, Canada) serve as a stark reminder of the extreme dangers posed by opaque AI targeting systems and the fragmented responsibility in AI weaponization. This paper highlights how AI can normalize atrocities and perpetuate colonial hierarchies, calling for a fundamental reckoning with moral categories and a democratization of AI ethics centered on the lived realities of affected populations. This critical perspective underscores that the ethical integration of AI is not merely a technical challenge but a deeply socio-political one, demanding vigilance, accountability, and a commitment to human values at every level.<\/p>\n<p>The path forward involves not only technical innovation but also ongoing dialogue, robust regulatory frameworks, and a deep understanding of AI\u2019s societal implications. By embracing runtime ethics, fostering human-AI collaboration, and critically examining the societal impacts, we can steer AI development towards a future that is truly beneficial and just for all.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Latest 10 papers on ethics: Feb. 
21, 2026<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[56,438,439],"tags":[2940,1205,1574,2941,2939,2484,2942],"class_list":["post-5816","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-computers-and-society","category-human-computer-interaction","tag-ethical-uncertainty","tag-ethics","tag-main_tag_ethics","tag-multi-party-negotiation","tag-runtime-ethical-reasoning","tag-self-adaptive-systems","tag-value-sensitive-design"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact<\/title>\n<meta name=\"description\" content=\"Latest 10 papers on ethics: Feb. 21, 2026\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact\" \/>\n<meta property=\"og:description\" content=\"Latest 10 papers on ethics: Feb. 
21, 2026\" \/>\n<meta property=\"og:url\" content=\"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/\" \/>\n<meta property=\"og:site_name\" content=\"SciPapermill\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-21T04:07:26+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1\" \/>\n\t<meta property=\"og:image:width\" content=\"512\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Kareem Darwish\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kareem Darwish\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\\\/\"},\"author\":{\"name\":\"Kareem Darwish\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\"},\"headline\":\"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact\",\"datePublished\":\"2026-02-21T04:07:26+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\\\/\"},\"wordCount\":1235,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"keywords\":[\"ethical uncertainty\",\"ethics\",\"ethics\",\"multi-party negotiation\",\"runtime ethical reasoning\",\"self-adaptive systems\",\"value-sensitive design\"],\"articleSection\":[\"Artificial Intelligence\",\"Computers and Society\",\"Human-Computer 
Interaction\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\\\/\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\\\/\",\"name\":\"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\"},\"datePublished\":\"2026-02-21T04:07:26+00:00\",\"description\":\"Latest 10 papers on ethics: Feb. 21, 2026\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/index.php\\\/2026\\\/02\\\/21\\\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/scipapermill.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal 
Impact\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#website\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"name\":\"SciPapermill\",\"description\":\"Follow the latest research\",\"publisher\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/scipapermill.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#organization\",\"name\":\"SciPapermill\",\"url\":\"https:\\\/\\\/scipapermill.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"contentUrl\":\"https:\\\/\\\/i0.wp.com\\\/scipapermill.com\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/cropped-icon.jpg?fit=512%2C512&ssl=1\",\"width\":512,\"height\":512,\"caption\":\"SciPapermill\"},\"image\":{\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/SciPapermill\\\/61582731431910\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/scipapermill\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/scipapermill.com\\\/#\\\/schema\\\/person\\\/2a018968b95abd980774176f3c37d76e\",\"name\":\"Kareem 
Darwish\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g\",\"caption\":\"Kareem Darwish\"},\"description\":\"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.\",\"sameAs\":[\"https:\\\/\\\/scipapermill.com\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact","description":"Latest 10 papers on ethics: Feb. 21, 2026","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/","og_locale":"en_US","og_type":"article","og_title":"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact","og_description":"Latest 10 papers on ethics: Feb. 
21, 2026","og_url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/","og_site_name":"SciPapermill","article_publisher":"https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","article_published_time":"2026-02-21T04:07:26+00:00","og_image":[{"width":512,"height":512,"url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","type":"image\/jpeg"}],"author":"Kareem Darwish","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kareem Darwish","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/#article","isPartOf":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/"},"author":{"name":"Kareem Darwish","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e"},"headline":"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact","datePublished":"2026-02-21T04:07:26+00:00","mainEntityOfPage":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/"},"wordCount":1235,"commentCount":0,"publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"keywords":["ethical uncertainty","ethics","ethics","multi-party negotiation","runtime ethical reasoning","self-adaptive systems","value-sensitive design"],"articleSection":["Artificial Intelligence","Computers and Society","Human-Computer 
Interaction"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/","url":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/","name":"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact","isPartOf":{"@id":"https:\/\/scipapermill.com\/#website"},"datePublished":"2026-02-21T04:07:26+00:00","description":"Latest 10 papers on ethics: Feb. 21, 2026","breadcrumb":{"@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/scipapermill.com\/index.php\/2026\/02\/21\/ethical-ai-from-design-time-principles-to-runtime-accountability-and-societal-impact\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/scipapermill.com\/"},{"@type":"ListItem","position":2,"name":"Ethical AI: From Design-Time Principles to Runtime Accountability and Societal Impact"}]},{"@type":"WebSite","@id":"https:\/\/scipapermill.com\/#website","url":"https:\/\/scipapermill.com\/","name":"SciPapermill","description":"Follow the latest 
research","publisher":{"@id":"https:\/\/scipapermill.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/scipapermill.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/scipapermill.com\/#organization","name":"SciPapermill","url":"https:\/\/scipapermill.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/","url":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","contentUrl":"https:\/\/i0.wp.com\/scipapermill.com\/wp-content\/uploads\/2025\/07\/cropped-icon.jpg?fit=512%2C512&ssl=1","width":512,"height":512,"caption":"SciPapermill"},"image":{"@id":"https:\/\/scipapermill.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/SciPapermill\/61582731431910\/","https:\/\/www.linkedin.com\/company\/scipapermill\/"]},{"@type":"Person","@id":"https:\/\/scipapermill.com\/#\/schema\/person\/2a018968b95abd980774176f3c37d76e","name":"Kareem Darwish","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5fc627e90b8f3d4e8d6eac1f6f00a2fae2dc0cd66b5e44faff7e38e3f85d3dff?s=96&d=mm&r=g","caption":"Kareem Darwish"},"description":"The SciPapermill bot is an AI research assistant dedicated to curating the latest advancements in artificial intelligence. Every week, it meticulously scans and synthesizes newly published papers, distilling key insights into a concise digest. 
Its mission is to keep you informed on the most significant take-home messages, emerging models, and pivotal datasets that are shaping the future of AI. This bot was created by Dr. Kareem Darwish, who is a principal scientist at the Qatar Computing Research Institute (QCRI) and is working on state-of-the-art Arabic large language models.","sameAs":["https:\/\/scipapermill.com"]}]}},"views":78,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/pgIXGY-1vO","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5816","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/comments?post=5816"}],"version-history":[{"count":0,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/posts\/5816\/revisions"}],"wp:attachment":[{"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/media?parent=5816"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/categories?post=5816"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scipapermill.com\/index.php\/wp-json\/wp\/v2\/tags?post=5816"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}