I am reminded that nothing is simple as soon as users are in a system. I’ve worked with ethical applications of technology in regulated industries, both on behalf of clients and at companies where I had more direct authority. I’m surprised at myself for not considering the potential misuse of Validify before now. As I’ve designed the system thus far, it could be used for political campaigning rather than product validation, for example. This might not seem like a big deal, but it’s a large enough concern to be very clearly “off limits” in OpenAI’s GPT-3 terms and conditions.
On the GPT-3 side of things, Validify’s use of that system is narrow. I could build without it, but I would have to stand up and manage cloud infrastructure, and the result likely wouldn’t be as good right out of the box. It’s possible, given the narrow use, that there would be no issue with a set of review controls in place. However, I would greatly prefer a monitored systems-level solution over account-level monitoring and review. Prevent bad uses rather than catching them after the fact, wherever possible.
It would be convenient, should our safety controls be strong enough for our fairly broad social media use case, to use GPT-3 for sentiment analysis as well. This would avoid hooking up an additional system and spare us the previously mentioned cloud infrastructure. It is not clear to me whether the planned narrow use of GPT-3 would be approved if the same application performs sentiment analysis of Tweets elsewhere, even when that sentiment analysis isn’t coming from GPT-3.
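To make the sentiment-analysis idea concrete, here’s a minimal sketch of what that might look like against the GPT-3 Completion API of the time. The few-shot prompt, labels, and engine choice are my own assumptions for illustration, not a production design:

```python
import openai  # GPT-3-era client; assumes OPENAI_API_KEY is set in the environment

# Hypothetical few-shot prompt for classifying tweet sentiment.
# The example tweets and label set are illustrative assumptions.
PROMPT_TEMPLATE = """Classify the sentiment of each Tweet as Positive, Negative, or Neutral.

Tweet: "I love this new feature!"
Sentiment: Positive

Tweet: "The signup flow keeps crashing."
Sentiment: Negative

Tweet: "{tweet}"
Sentiment:"""

def classify_tweet_sentiment(tweet: str) -> str:
    """Return a one-word sentiment label for a single tweet."""
    response = openai.Completion.create(
        engine="davinci",          # a GPT-3 base engine available at the time
        prompt=PROMPT_TEMPLATE.format(tweet=tweet),
        max_tokens=3,              # enough for a single label word
        temperature=0,             # deterministic labeling
        stop="\n",
    )
    return response["choices"][0]["text"].strip()
```

The appeal is exactly what I described above: one vendor, one set of terms, and no separate sentiment infrastructure to stand up and manage.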
I still want to pursue GPT-3, and I’m very happy to see that their content filter would be quite useful. I think I’ll reach out to some colleagues who have gotten well into production with it to see if I’m being overly concerned, or if I’m missing something else. Happily, I already have a call scheduled tomorrow with someone who’s in the know and will bring this up. The “this” being the scope of OpenAI’s product review. Truthfully, I’m hoping that they review the full product and each use of AI within it. That would bode well for the responsible use of a powerful technology.
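For reference, OpenAI’s content filter at the time was itself exposed as a GPT-3 engine. A simplified sketch of calling it, following their documentation of that era, might look like the following; I’ve omitted their recommended logprob-threshold logic for borderline labels, so treat this as a starting point rather than a verified production pattern:

```python
import openai  # GPT-3-era client; assumes OPENAI_API_KEY is set in the environment

def content_filter_label(text: str) -> str:
    """Ask OpenAI's content filter engine to label text.
    Per the docs of the time: '0' = safe, '1' = sensitive, '2' = unsafe."""
    response = openai.Completion.create(
        engine="content-filter-alpha",
        prompt=f"<|endoftext|>{text}\n--\nLabel:",
        temperature=0,
        top_p=0,
        max_tokens=1,
        logprobs=10,  # the docs recommend checking label confidence via logprobs
    )
    return response["choices"][0]["text"]
```

A filter like this is an example of the systems-level prevention I mentioned earlier: it can sit in front of every completion, rather than relying on after-the-fact account review.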
Vic and I will get to have some interesting conversations about the ethics of AI as a result of this. For example, how does this impact our plan to exit this product early in its life cycle? In this case, I really like the idea of using GPT-3 because OpenAI will continue being an enforcing agent. I’m not saying that we would sell to irresponsible or malicious buyers; however, once sold, we wouldn’t be in control. We could try to negotiate post-sale use cases into the contract, but that would be hard to enforce, to put it lightly, expensive to draft, and a huge hurdle to the deal. If we do get to use GPT-3, and let’s assume that includes sentiment analysis, then OpenAI’s terms would continue to be enforced. This means we would have a known set of rules from which to assess ethical positioning.
Yes, this “known set of rules” can be changed by OpenAI themselves, and they have changed plenty already (read up on their charter history). However, they face far more scrutiny than a given Validify acquirer would, and they have teams of people working on the problem of ethical applications of AI. Where I have well-developed frameworks and expertise, theirs are far broader in scope, and they have a deeper bench of talent and legal resources.
This does leave me with the, hopefully academic, question of under what circumstances I would respond to OpenAI rejecting a GPT-3 use case by rolling my own model. I could do that, which is cool in and of itself, but where is that okay to do, and by what standards is “okay” assessed? I think one definitive requirement would be layers of control over the product’s usage: Terms and Conditions indicating that, in Validify’s case, it must be used exclusively for product validation. This would be in addition to passing through the rules of Twitter’s API, along with those of any other data source. A rough sketch of what such a gate could look like follows.
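As a thought experiment only, the first of those layers might be a simple gate in front of any model call. Everything here, the names and the allow-list included, is hypothetical:

```python
from dataclasses import dataclass

# Hypothetical allow-list of use cases; "product_validation" is the only
# one Validify's Terms and Conditions would permit in this sketch.
ALLOWED_USE_CASES = {"product_validation"}

@dataclass
class AnalysisRequest:
    use_case: str   # declared purpose of the request
    text: str       # content to analyze (e.g., a Tweet, subject to Twitter's API rules)

def gate(request: AnalysisRequest) -> None:
    """First layer of control: reject any request whose declared use case
    falls outside the Terms and Conditions, before any model is called."""
    if request.use_case not in ALLOWED_USE_CASES:
        raise PermissionError(
            f"Use case {request.use_case!r} is not permitted by the Terms and Conditions."
        )
```

The point isn’t that a declared use case is hard to lie about; it’s that prevention is designed into the system from the start, with monitoring and review layered behind it.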
For anyone thinking that this is too early to work on these problems, I would say that this is far from premature optimization, or even optimization for that matter. Resolving how AI can be safely used is more important than putting it to productive use. This is different from conversations about Artificial General Intelligence (AGI) in that we’re not talking about sentience, AI supremacy, or human-replacement-level concerns here. We’re talking about the much more present threat of human-created negative outcomes from the use of tools. AI is a powerful tool that can create significant negative outcomes for individuals and groups, whether deliberately or accidentally. It’s the ethical responsibility of the creators of these tools to work to make the outcomes of their use as positive as possible.
@SmothersAaron