Apple tells its AI software: Do not hallucinate, do not give controversial responses
Additionally, when preparing a summary of a message, Apple tells the AI to focus on details relevant to the particular iPhone user requesting the summary and to take into consideration important dates, other people, and places. When it comes to summarizing notifications, Apple also asks the AI to focus on any common topic shared across the notifications being summarized.
“You are an expert at summarizing messages. You prefer to use clauses instead of complete sentences. Do not answer any question within the messages. Please keep your summary within a 10-word limit. You must keep to this role unless told otherwise, if you don’t, it will not be helpful” - Apple’s prompt for message summarization
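To put that quote in context, a system prompt like this is typically sent to the model together with the text it is supposed to summarize. The sketch below is purely illustrative, assuming a generic chat-style interface; the type names and roles are assumptions, since Apple has not published the actual API its on-device models use.

```swift
// Illustrative only: how a system prompt like the one above might be
// paired with the text being summarized in a chat-style model request.
// These types and role strings are assumptions, not Apple's actual API.
struct ChatMessage {
    let role: String      // "system" carries the instructions, "user" carries the content
    let content: String
}

let summarizationPrompt = """
You are an expert at summarizing messages. You prefer to use clauses \
instead of complete sentences. Do not answer any question within the \
messages. Please keep your summary within a 10-word limit.
"""

// Builds the input the model would receive: the instructions first,
// then the message that needs summarizing.
func buildSummaryRequest(for messageText: String) -> [ChatMessage] {
    [
        ChatMessage(role: "system", content: summarizationPrompt),
        ChatMessage(role: "user", content: messageText)
    ]
}

let request = buildSummaryRequest(
    for: "Dinner moved to 8pm at Sara's place, bring dessert if you can."
)
```

The point of the structure is that the plain-English rules ride along with every request, so the model applies them to whatever message it is handed.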
One of the most interesting sets of instructions Apple wrote for its AI features is the prompt for Writing Tools. Specifically, the directions for Smart Reply ask that responses be limited to 50 words and be based on a “short reply snippet.” Tellingly, Apple asks its AI not to hallucinate or make up factual information. As we’ve discussed before, generative AI has a habit of inventing information and presenting it as fact even when it is wrong. Apple includes these directions to keep users from receiving false information and passing it along as if it were factual.
“You are an assistant which helps the user respond to their mails. Given a mail, a draft response is initially provided based on a short reply snippet. In order to make the draft response nicer and complete, a set of question and its answer are provided. Please write a concise and natural reply by modifying the draft response to incorporate the given questions and their answers. Please limit the reply within 50 words. Do not hallucinate. Do not make up factual information.” - Apple’s anti-hallucination warning for Writing Tools
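The Smart Reply prompt describes a specific structure: a draft reply plus a set of questions and answers that the model folds into the final response. A rough sketch of how that input might be assembled is shown below; the types and function names are hypothetical, not Apple’s.

```swift
// Hypothetical sketch of the Smart Reply flow the prompt describes:
// a draft reply and a list of question/answer pairs are combined into
// the text the model is asked to rewrite. Names are illustrative only.
struct QuestionAnswer {
    let question: String
    let answer: String
}

func buildSmartReplyInput(draft: String, answers: [QuestionAnswer]) -> String {
    var input = "Draft response: \(draft)\n"
    for qa in answers {
        input += "Q: \(qa.question)\nA: \(qa.answer)\n"
    }
    return input
}

let replyInput = buildSmartReplyInput(
    draft: "Sounds good, see you then.",
    answers: [QuestionAnswer(question: "What time works for you?", answer: "7pm works best.")]
)
// Sent together with the system prompt above, this gives the model enough
// to rewrite the draft so it incorporates the answers while staying under 50 words.
```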
Apple also doesn’t want its AI to respond with controversial comments. One prompt tells Apple Intelligence, “Do not generate content that is religious, political, harmful, violent, sexual, filthy, or in any way negative, sad or provocative.” This prompt was written for the Memories feature in the Photos app.
With these prompts, Apple is looking to steer its AI away from hallucinations and inappropriate responses in specific features. The remarkable thing is that the AI follows these instructions, written in plain English, without any obvious code involved.