Deepening Conversations on Ethical Concerns
One of the research teams taking part in the pilot was led by Sarah Billington, professor of civil engineering and a senior fellow at the Stanford Woods Institute for the Environment. Her research proposal focused on human-centered building design that uses data collection and machine learning to sense and respond to occupants' well-being, adapting building systems and digital environments to support and enhance the occupant experience.
“A major part of our research agenda centers on developing a platform that respects and preserves occupant privacy, and this is what the ESR board picked up on,” Billington says. “The review team had some significant concerns along those lines, wondering if the eventual platform could be used by various actors in a ‘Big Brother’ way to monitor employees through their personal information, or to focus solely on productivity.”
The ESR team also expressed concern that the building’s sensor systems could be misused by future employers in the building, a scenario the team hadn’t thoroughly considered, Billington says. Her team iterated with the reviewers to clarify the project’s intent and its privacy safeguards and is now considering new ways of aggregating and anonymizing the data. They’ve also expanded their research around privacy, exploring how individuals do — and do not — want their data used and how comfortable they are sharing that information.
“The ESR deepened the conversations around privacy that we were already having,” Billington says. “The other thing they suggested — which I hadn’t really thought about — was for us to become advocates for ethics in our project. So the next time I gave a talk, for example, I used that platform to emphasize the need for privacy-preserving research in our work. I hadn’t thought so much before how as a researcher I can elevate these conversations around ethics with my voice.”
New Strategies Lead to New Designs
Surveys conducted following the pilot showed that 67 percent of respondents who had iterated with the ESR — and more than half of all researchers surveyed — felt the ESR process had influenced their research design. Moreover, in 80 percent of the projects that iterated with the ESR, reviewers identified ethical and societal risks the researchers themselves had not mentioned. Every survey respondent said they would be willing to engage in the ESR process again.
Those same surveys showed that researchers — some of whom had little experience thinking deeply about ethical issues — appreciated the structure of the ESR process and requested even more guidance in future reviews.
“The scaffolding was the biggest benefit researchers reported from undergoing the ESR process,” says Bernstein. “They said we forced them to stop and consider these issues and gave them some strategies for how to think about them that they didn’t necessarily have before.”
Raising Questions for Better Scientific Discovery
The ESR team aims to continue expanding the program — initially to other parts of the Stanford campus beyond AI, where it could help guide researchers working in areas such as sustainability or electoral research. The team also anticipates that the model — which they hope to evolve toward coaching rather than simple review — could be adopted at other universities and, perhaps in some form, in industry, where well-intentioned AI developers sometimes don’t know which ethical questions to ask.
Importantly, the ESR is not designed to limit research, to make all proposals risk-free, or to remove all negative impacts from AI development, says ESR team member Margaret Levi. “The ESR raises the questions; the decisions are ultimately up to the individual researcher,” Levi says. “What’s important is that researchers have a good reason for making their decisions and some strategy for mitigating those consequences that can be mitigated. What this does is create better scientific discoveries by thinking ahead of time about what the downstream consequences could be. It helps at the front end of research and can also help people down the line, as they make discoveries that turn out to have unintended consequences that couldn’t be anticipated.”
The ESR will continue to be a part of HAI grant funding for the foreseeable future, says Landay.
“We think this is pretty innovative, and the results so far have been more positive than any of us would have expected; we’re only hearing enthusiasm for the program,” he says. “Our hope is to really get the model out there so others can see what’s possible and hopefully replicate it at their own universities. I think we’re surely going to see a model of this kind in one form or another start to become the norm in AI development.”
The ESR won’t resolve all the ethical and societal issues inherent in AI, but it could be a valuable tool for researchers and developers in the field, Landay says.
“There still has to be political will for the policies, laws and other approaches that can limit the negative impact of harmful technology, but this program is changing the conversation, how people are educated, and the culture,” he says. “This isn’t a solution, and it won’t solve all our problems. But it’s a piece of the puzzle that gets us to a better place.”
This work was supported by the Public Interest Technology University Network; Stanford’s Ethics, Society, and Technology Hub; and the Stanford Institute for Human-Centered Artificial Intelligence.
"impact" - Google News
June 24, 2021 at 11:08PM
https://ift.tt/2U1ctci
A New Approach To Mitigating AI's Negative Impact - Stanford University News - Stanford University News
"impact" - Google News
https://ift.tt/2RIFll8
Shoes Man Tutorial
Pos News Update
Meme Update
Korean Entertainment News
Japan News Update
Bagikan Berita Ini
0 Response to "A New Approach To Mitigating AI's Negative Impact - Stanford University News - Stanford University News"
Post a Comment