Legal snags hold back the application of artificial intelligence
News | November 11, 2019
Manufacturing companies, providers of software and sensor technology, and R&D departments: to make the step to smart industry, everyone is exploring the possibilities of artificial intelligence (AI). Algorithms should contribute to more efficient production processes, higher quality, lower costs and increased safety, but they also bring new legal issues with them. This requires entrepreneurs to be extra alert when entering into agreements on this point.
Datastream
YP Your Partner in Drachten is one such company, mainly involved in collecting, bringing together and processing (sensor) data. It manages these data flows centrally in its own software platform, CARS, with which machines, devices and processes can be monitored, operated and automated remotely. YP Your Partner is currently working on the transition from CARS 7 to 8, and AI must also find its place in this. 'Now, if a threshold value is exceeded, an alarm goes off. That has to be smarter: you not only want to receive the notification that the pressure in the boiler has been exceeded, but also context information. What is the installation date of the boiler, what serial and type number does it have, et cetera. Only then can you plan your maintenance more efficiently,' says director Theun Prins.
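The enriched alarm Prins describes can be sketched as follows. This is a minimal illustration, not the CARS platform itself; the record fields, function names and threshold are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical asset record; the field names are illustrative, not the CARS data model.
@dataclass
class Boiler:
    serial_number: str
    type_number: str
    installation_date: str  # ISO date
    max_pressure_bar: float

def pressure_alarm(boiler: Boiler, measured_bar: float) -> Optional[dict]:
    """Return an enriched alarm instead of a bare threshold notification."""
    if measured_bar <= boiler.max_pressure_bar:
        return None
    return {
        "event": "pressure_exceeded",
        "measured_bar": measured_bar,
        "threshold_bar": boiler.max_pressure_bar,
        # Context information that makes maintenance planning easier:
        "serial_number": boiler.serial_number,
        "type_number": boiler.type_number,
        "installation_date": boiler.installation_date,
    }

boiler = Boiler("SN-1042", "T-300", "2012-05-01", 8.0)
alarm = pressure_alarm(boiler, 9.2)
```

The point of the sketch is that the notification carries the asset's context along with the measurement, so a maintenance planner does not have to look it up separately.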
Convert analog data
According to Prins, YP Your Partner has been successfully measuring data flows for thirty years and also has the tools to determine the context. The big challenge remains getting the system filled. 'At the moment, a lot of information about installations is only available in, for example, PDF format and cannot be used digitally. If you want to use that analog data, it must be converted into data suitable for algorithms.' In this context, Prins refers to BIM (Building Information Model) in construction, in which the parties involved enter and share all data relating to installations, materials and the like. 'We also need a system like this for industry. Until now, only small pieces of our knowledge have been used to compare installations or to have them react to each other. This concerns compact algorithms, not complex systems. There is no business case yet for the installed base, but that is a condition for the meaningful application of AI.'
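The conversion step Prins mentions can be illustrated with a toy example: turning flat datasheet text, as it might be copied out of a PDF, into a structured, algorithm-ready record. The "key: value" layout and field names are assumptions for the sketch, not an actual BIM or CARS format.

```python
# Flat text as it might come out of an installation datasheet in PDF form
# (the layout and field names are hypothetical).
raw_datasheet = """\
serial_number: SN-1042
type_number: T-300
installation_date: 2012-05-01
max_pressure_bar: 8.0"""

def parse_datasheet(text: str) -> dict:
    """Convert 'key: value' lines into a structured record."""
    record = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        record[key.strip()] = value.strip()
    # Cast numeric fields so algorithms can actually compute with them.
    record["max_pressure_bar"] = float(record["max_pressure_bar"])
    return record

record = parse_datasheet(raw_datasheet)
```

Only once information is structured like this can it feed comparisons between installations or threshold logic.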
Decision Support
The added value of AI is clear: work faster and more efficiently at lower costs, with predictive maintenance as the magic word. 'Many companies are still in the phase of static and manual rather than interactive reports. We want to move towards fully predictive automated systems. This requires algorithms that generate decision-supporting information based on concrete questions to systems,' says Prins. 'That if, for example, a pump breaks down three times in a row, the software asks whether it would not be wise to replace that pump. So not only signal, but also advise, and then automatically order a new pump as soon as the cost of the malfunctions exceeds the cost price of the pump. We as humans have yet to develop a feel for that.'
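The decision rule Prins sketches, signal, then advise, then order automatically once repairs cost more than the pump, could look like this. The thresholds, names and return values are illustrative assumptions, not an existing system.

```python
# Sketch of a signal/advise/order decision rule (all numbers hypothetical).
def advise(repair_costs: list, pump_price: float) -> str:
    """Decide what to do after a series of pump malfunctions."""
    total = sum(repair_costs)
    if len(repair_costs) >= 3:
        # Three malfunctions in a row: replacement comes into view.
        if total > pump_price:
            return "order_replacement"   # automate: order a new pump
        return "advise_replacement"      # decision support: suggest it
    return "signal_only"                 # classic behaviour: just alarm

advise([120.0, 150.0, 90.0], pump_price=300.0)  # repairs total 360 > 300
```

The escalation from signalling to advising to acting is exactly the progression from static reports to fully automated predictive systems that Prins describes.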
Chain of parties
And not only that: blindly trusting algorithms also has legal consequences, Prins knows. 'Suppose you have implemented such an algorithm and it has worked well ten times. The pump is ordered ten times, but the eleventh time a mega order is suddenly sent out. Who is responsible for that: the one who developed the algorithm or the one who applies it? And how do you deal with that?' Such a chain of parties, ranging from the owner of the machine and the provider of the intelligent software to the developer of the algorithm and the supplier of hardware and sensors, can complicate the application of AI, confirms André Kamps, independent ICT and privacy lawyer at Kamps Juridisch Advies. 'If YP has devised an algorithm and built it into an application, customers sometimes apply it in completely different processes. Then it is good to properly define in a user agreement what the application will be used for, and to record this. And also to make separate agreements about other applications, so that you are not faced with surprises,' he says.
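Beyond contractual scoping, the runaway eleventh order in Prins's example can also be caught technically with a plain guardrail: the ordering step refuses quantities outside an agreed bound and escalates to a human instead of executing blindly. The limit and names below are hypothetical.

```python
# Guardrail sketch: never execute an algorithm's ordering decision blindly.
MAX_AUTO_ORDER_QTY = 2  # agreed upper bound (hypothetical)

def place_order(quantity: int) -> str:
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    if quantity > MAX_AUTO_ORDER_QTY:
        # A 'mega order' is handed off for human review instead of being sent.
        return f"escalated: {quantity} exceeds auto-order limit"
    return f"ordered: {quantity}"

place_order(1)     # normal case, ordered automatically
place_order(500)   # the mega order is escalated, not sent
```

Such a bound does not settle the liability question, but it limits the damage while the legal side is still unclear.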
Disagreement
In addition to the question of who is liable for which part, the lack of written agreements can also lead to disagreements about, for example, IP or revenues. Prins illustrates this with a practical example. 'Suppose we have developed an algorithm for a machine builder in a certain domain, and another partner discovers that he can also apply it in his domain. Who owns the IP? And how do you deal with the earnings that result from it? So far, this has usually worked out well. If a user earns a lot of money thanks to that ingenuity, we ask for a predetermined percentage of the profit. But legally we have not yet managed to properly flesh this out. We adjust agreements as our insight advances.' YP has already defined three types of partnerships for software usage and resale rights. 'Then it is clear what a customer can and may do with our software. We now have to regulate AI in the same way.'
Appropriate agreements
The need for appropriate agreements will be at least as great in the case of algorithms, Kamps agrees. 'How do you handle confidential data? If a party wants to terminate the agreement, can it still do something with the data? What about the rights if a party chooses to move forward with a competitor? The issue of continuity is also important: what if a party fails to fulfill agreements, or goes out of business? Is the system easy to replace? Has an exit strategy been agreed at all? These are all things that you have to inventory in advance and, where necessary, adjust your revenue model to.' Other legal aspects that deserve attention are the processing of personal data (for example when measuring the productivity of operators) and pricing in the chain.
Increasing unpredictability
As company software systems are increasingly linked together, the use of algorithms is becoming more unpredictable. And because algorithms are becoming more complex and increasingly self-learning, their output is also more difficult to foresee. 'This complicates agreements about liability. In the Civil Code, damage is linked to the product or its owner; the question is whether this can be applied to self-learning algorithms,' says Kamps. 'If you, as an entrepreneur, opt for algorithms, you have to think about strict liability and take out insurance for it. However, insurers have little experience with misinterpreted or misapplied algorithms and are probably still hesitant. Perhaps in the near future, parallel to the supplementary cyber policy, an information expert will assess the algorithm for acceptance in the insurance.'
Linking sources
In the run-up to more intensive use of AI, Prins wants to take a number of steps with YP. 'First of all, the linking of all information sources, then the even better virtualization of physical reality and, finally, far-reaching integration of people, made easier by mobile phones and interconnectivity,' he says. 'People have to enter data and provide feedback on causes in a disciplined way, which is essential for root cause analysis. That data flow is also necessary to bring AI to the level at which automated processes are possible. From data to action, fully automated.'
This article was published in Link Magazine from October 2019.