Artificial intelligence (AI), like any technology, cuts both ways. Art-generating models such as Stable Diffusion have driven artistic innovation and spawned new business opportunities, but their open-source nature also enables the creation of deepfakes at scale and has raised concerns among artists that others are profiting from their work.
As we look ahead to 2023, it remains uncertain how AI will be regulated and whether new systems like ChatGPT will continue to disrupt industries once thought immune to automation.
Expect more (problematic) art-generating AI apps
Expect an influx of art-generating AI apps in the vein of Lensa, the popular AI-powered selfie app from Prisma Labs. These apps can be problematic: they can be tricked into creating inappropriate content, and they tend to disproportionately sexualize and alter the appearance of women.
Despite these risks, expect generative AI to keep working its way into consumer technology, as companies chase either substantial revenue or a meaningful place in the daily lives of the general public. Not every attempt will succeed.
Artists Spearhead a Movement to Opt Out of Training Data Sets
Artists have been pushing for the ability to opt out of the data sets used to train artificial intelligence (AI) systems. The issue came to a head after DeviantArt released an AI art generator trained on artwork from its own community, drawing broad criticism from users over the lack of transparency about how their art was used.
While companies such as OpenAI and Stability AI say they have taken steps to prevent infringing content, there is clear evidence that more work is needed. Stability AI, which funds the development of Stable Diffusion, has announced that it will let artists opt out of the data set used to train the next version of Stable Diffusion.
OpenAI, on the other hand, does not offer an opt-out mechanism and instead licenses image galleries from organizations such as Shutterstock.
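Mechanically, honoring an opt-out comes down to filtering the training corpus against a registry of excluded works before training ever starts. The sketch below is purely illustrative: the `ArtworkRecord` type, the registry contents, and the `is_opted_out` lookup are hypothetical stand-ins, not the API of any particular provider.

```python
from dataclasses import dataclass


@dataclass
class ArtworkRecord:
    """Hypothetical record for one scraped artwork in a training corpus."""
    url: str
    artist_id: str
    caption: str


# Hypothetical opt-out registry; in practice this might be a shared service
# queried by artist ID, URL, or perceptual hash rather than a hard-coded set.
OPT_OUT_REGISTRY: set[str] = {
    "artist-123",
    "artist-456",
}


def is_opted_out(record: ArtworkRecord) -> bool:
    """Return True if the artist has asked to be excluded from training."""
    return record.artist_id in OPT_OUT_REGISTRY


def filter_training_set(records: list[ArtworkRecord]) -> list[ArtworkRecord]:
    """Drop opted-out works before the corpus ever reaches the trainer."""
    return [r for r in records if not is_opted_out(r)]


if __name__ == "__main__":
    corpus = [
        ArtworkRecord("https://example.com/a.png", "artist-123", "a landscape"),
        ArtworkRecord("https://example.com/b.png", "artist-789", "a portrait"),
    ]
    print(len(filter_training_set(corpus)))  # 1: the opted-out work is removed
```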
In the US, Microsoft, GitHub, and OpenAI face a class action lawsuit alleging that Copilot, GitHub’s code suggestion service, reproduces licensed code without proper attribution.
Expect criticism of AI systems, and of the data sets used to train them, to keep mounting, particularly as the UK weighs rules that would drop the requirement that systems trained on publicly available data be used strictly for non-commercial purposes.
Open-source and decentralized initiatives will keep gaining traction
In recent years, the field has trended toward domination by a few large AI companies, such as OpenAI and Stability AI.
That could shift in the coming year toward open-source and decentralized efforts, as the ability to build new systems spreads beyond large, well-funded AI labs.
This shift towards a community-based approach may lead to more careful scrutiny of AI systems as they are developed and deployed.
Examples of community-driven efforts include EleutherAI’s large language models and BigScience’s efforts, which are supported by the AI start-up Hugging Face. While funding and expertise are still necessary for training and running sophisticated AI models, decentralized computing may eventually compete with traditional data centers as open-source efforts mature.
The Petals project, recently released by BigScience, is a step toward decentralized development: it lets individuals contribute their computing power to run large language models that would normally require specialized hardware.
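To give a sense of what that looks like in practice, here is a minimal sketch following the usage pattern in the Petals documentation around its release. The class name and checkpoint are assumptions tied to that early version of the client; the project’s API has continued to evolve.

```python
# Minimal sketch of generating text over the Petals swarm (pip install petals).
# DistributedBloomForCausalLM and "bigscience/bloom-petals" follow the
# project's early documentation and may differ in later releases.
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

MODEL_NAME = "bigscience/bloom-petals"  # checkpoint served by the public swarm

tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
# The transformer blocks run on volunteer machines across the internet;
# only the small input/output layers run on the local client.
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

inputs = tokenizer("A cat in French is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```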
However, large labs will likely retain their advantages as long as their methods and data stay proprietary, as seen with OpenAI’s Point-E model, which generates 3D objects from text prompts but whose training data sources were not disclosed.
And while open-source and decentralization efforts open the field to a larger pool of researchers, practitioners, and users, resource constraints will still keep many out.
AI businesses prepare for upcoming regulations
As AI becomes increasingly prevalent in various industries, there is a growing recognition of the need for regulatory measures to ensure that AI systems are developed and deployed ethically and responsibly.
For instance, the EU’s AI Act and local regulations such as New York City’s AI hiring law aim to address potential biases and technical flaws.
It is likely that there will be debates and legal disputes over the details of such regulations before any penalties are imposed.
Companies may also maneuver for the risk category that is most advantageous to them, such as under the EU AI Act’s four risk tiers.
Those tiers range from “high-risk” AI systems, such as credit-scoring algorithms and robotic surgery apps, which must meet certain criteria before they can be sold in Europe, to “minimal or no risk” systems, such as spam filters and AI-enabled video games, which need only be transparent about their use of AI.
There are worries, however, that companies could angle for the lower-risk categories to avoid scrutiny and limit their liability.
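To make the tier structure concrete, here is a toy sketch mapping the examples above onto the Act’s risk levels. It is an illustration of the taxonomy only, not a legal classification tool; the obligation summaries are simplified, and any assignment beyond the examples named above would be an assumption.

```python
from enum import Enum


class RiskTier(Enum):
    """Simplified summary of the EU AI Act's four risk tiers."""
    UNACCEPTABLE = "banned outright"
    HIGH = "must meet conformity requirements before sale in the EU"
    LIMITED = "transparency obligations: disclose that AI is in use"
    MINIMAL = "no additional obligations"


# Illustrative mapping only; real classification follows the Act's criteria,
# decided case by case, not a lookup table.
EXAMPLE_SYSTEMS = {
    "credit-scoring algorithm": RiskTier.HIGH,
    "robotic surgery app": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```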
What Growing Market Investments Will Look For in 2023
Artificial intelligence (AI) investments will not necessarily pan out, cautions Maximilian Gahntz, a senior policy researcher at Mozilla.
He urges caution in building AI systems that may benefit many people while harming others, arguing that much work remains before such systems are ready for wide release.
Gahntz also emphasized that the business case for AI involves not only fairness but also consumer satisfaction.
If a model produces garbled, flawed results, it is unlikely to win over consumers.
Despite the potential risks, investors seem eager to invest in promising AI technologies.
Several AI companies, including OpenAI and Contentsquare, have recently received significant funding.
While some AI firms, such as Cruise, Wayve, and WeRide, focus on self-driving technology and robotics, others, like Uniphore and Highspot, specialize in software for analytics and sales assistance. Investors may well favor AI applications that are less risky but also less ambitious, such as automating the analysis of customer complaints or generating sales leads.