Is there ethics in digitalization?

Digitalization has become one of the megatrends most affecting our everyday lives. Even if we don’t always realize it, algorithms are affecting our choices on a day-to-day basis. Everyone who has used Netflix or Spotify has been suggested content by artificial intelligence (AI) technology working behind the scenes. People can work through cloud services from wherever they are, and consumers expect all services to be available at all times, to work fast, and to run without a hitch. The digitalization of societies pushes the boundaries of what is possible and offers many opportunities, but it also challenges our moral boundaries: even if something is technically possible, is it morally right? This is why we now aim our Spotlight at the ethics of digitalization.

Digitalization is changing how companies operate, how services and products are delivered, and what services customers demand. It affects all areas of existing operating models and portfolios, and it is also creating new business opportunities. The technology is developing fast, and new solutions become available at ever-growing speed. Solutions such as cloud technologies, advanced analytics, machine learning and artificial intelligence, as well as advanced robotics, are already available from several suppliers.

The use of these new digital tools, especially the many applications of artificial intelligence, raises a number of ethical concerns in areas such as safety, privacy, transparency and integrity, to name only a few. Questions have been raised, and rightfully so, about how much control organizations can retain over machines’ decision-making processes and how to ensure that the AI systems they adopt always act in line with the organization’s core values.

What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of its actions and decisions? Can an AI system be held accountable for its actions? These and many other related questions are currently the focus of much attention. How society and our systems deal with these questions will to a large extent determine our level of trust in AI and, ultimately, its impact on society. AI technologies are not ethical or unethical per se, but these questions should be taken into account in every phase of the design and development process.

Stora Enso is one of the first companies to join Finland’s Artificial Intelligence Programme, organized by the Ministry of Economic Affairs and Employment of Finland, which challenges companies to commit to the ethical use of artificial intelligence. “We in Stora Enso are driven by our Values, Lead and Do What’s Right, and want to be at the forefront of creating ethical principles for the use of AI”, says Samuli Savo, the Chief Digital Officer of Stora Enso. “We want the development of these technologies to be guided by trust and accountability, which are cornerstones of responsible business. By working with regulators and policy makers, we have the opportunity to contribute to agreeing on a framework of ethics and norms in which AI can thrive and innovate safely”, he continues. You can read more about the challenge and how we are contributing here.

The old saying that “ethical considerations should be built in, not bolted on” is all the more true when it comes to the development and implementation of these new technologies. Take a moment to go over the Spotlight material here and learn what to take into account when working in this digitalized world.
