
Nextcloud has launched its Ethical AI Rating system in response to the increasing risks of artificial intelligence. The rating system is designed to be transparent and to give users insight into the ethical implications of a particular AI integration in Nextcloud.

It provides a quick insight into the software, training data, and trained model, using four levels ranging from Red to Green based on a simple point system. The Ethical AI Rating allows users to understand the ethical considerations and potential risks involved in AI use and helps simplify choices for users and customers.

Challenges of AI

The AI field is fast-moving and brings ethical and legal challenges to the table, such as privacy and security of user data, discrimination, and biases. The availability of open-source code for neural network-based AI technologies is no longer enough to guarantee user control and software safety. The set of data used for training and the software used, along with the final model, are all factors that determine the amount of freedom and control a user has.

Ethical AI standards

Nextcloud aims to address these challenges head-on by embracing ethics and transparency in AI use. With the release of Hub 4, the company provides cutting-edge technologies with transparency. The Ethical AI Rating system provides a way for users to understand the ethical implications of AI integration in Nextcloud. The company encourages users to look more deeply into specific solutions, but the Ethical AI Rating system simplifies the choice for most users and customers.

Ethical AI Rating rules

The Ethical AI Rating is designed to give users a quick insight into the ethical implications of AI integration in Nextcloud. The rating system is based on four levels: Red, Orange, Yellow, and Green. The rating system considers factors such as whether the software is open source, whether the trained model is freely available for self-hosting, and whether the training data is available and free to use.

If all three conditions are met, the rating system gives a Green label. If none are met, the label is Red. If one condition is met, the label is Orange, and if two conditions are met, the label is Yellow.
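The mapping from conditions to labels can be sketched as a small function. This is an illustrative sketch only; the function and parameter names are hypothetical and not part of any Nextcloud API.

```python
def ethical_ai_rating(open_source: bool,
                      model_available: bool,
                      data_available: bool) -> str:
    """Map the three openness conditions to a color label.

    Hypothetical sketch of the rule described above:
    0 conditions met -> Red, 1 -> Orange, 2 -> Yellow, 3 -> Green.
    """
    met = sum([open_source, model_available, data_available])
    return {0: "Red", 1: "Orange", 2: "Yellow", 3: "Green"}[met]
```

For example, an integration whose code is open source and whose model can be self-hosted, but whose training data is not available, would be labeled Yellow under this scheme.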

Additionally, the rating system includes one note on bias, pointing out if major biases were discovered in the data set or in the expression of the model. However, other ethical considerations for AI are not included, such as legal challenges around data sets and energy usage.

The Nextcloud Ethical AI Rating system empowers users to make informed decisions and provides a way to simplify the choice of AI solutions for users and customers.
