Help prevent generative AI hallucinations with Amazon Bedrock Automated Reasoning checks

To improve the factual accuracy of large language model (LLM) responses, AWS announced Amazon Bedrock Automated Reasoning checks (in gated preview) at AWS re:Invent 2024. In this post, we discuss how to help prevent generative AI hallucinations using Amazon Bedrock Automated Reasoning checks.

Source: AWS Machine Learning Blog
Foundation models (FMs) and generative AI are transforming how enterprises operate across industries. Recent research by McKinsey & Company estimates that generative AI could contribute up to $4.4 trillion annually to the global economy through enhanced operational efficiency, productivity growth of 0.1% to 0.6% per year, improved customer experience through personalized interactions, and accelerated digital transformation. Yet organizations struggle with AI hallucinations when moving generative AI into production environments. Model hallucinations, where AI systems produce plausible but incorrect information, remain a primary concern. The 2024 Gartner CIO Generative AI Survey highlights three major risks: reasoning errors from hallucinations (59% of respondents), misinformation from bad actors (48%), and privacy concerns (44%).

To improve the factual accuracy of large language model (LLM) responses, AWS announced Amazon Bedrock Automated Reasoning checks (in gated preview) at AWS re:Invent 2024. Through logic-based algorithms and mathematical verification, Automated Reasoning checks validate LLM outputs against domain knowledge encoded in an Automated Reasoning policy to help prevent factual inaccuracies. Automated Reasoning checks are part of Amazon Bedrock Guardrails, a framework that also provides content filtering, personally identifiable information (PII) redaction, and enhanced security measures. Together, these capabilities enable organizations to implement reliable generative AI safeguards, with Automated Reasoning checks addressing factual accuracy while other Amazon Bedrock Guardrails features help protect against harmful content and safeguard sensitive information. (A minimal sketch of invoking a guardrail appears at the end of this section.)

In this post, we discuss how to help prevent generative AI hallucinations using Amazon Bedrock Automated Reasoning checks.

Automated Reasoning overview

Automated Reasoning is a specialized branch of computer science that uses mathematical proof techniques and formal logic to verify the behavior of systems and programs.
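To make the formal-logic idea concrete, here is a minimal sketch of logic-based consistency checking using the open-source Z3 solver (pip install z3-solver). This is an illustration of the general technique, not the internals of Amazon Bedrock; the policy rule, variable names, and claim below are hypothetical.

```python
# Sketch: verify a claim against an encoded domain rule with the Z3 solver.
# Hypothetical example; not an Amazon Bedrock API.
from z3 import Bools, Implies, Not, Solver, unsat

# Domain knowledge encoded as a rule: if an employee is full-time,
# the employee is eligible for health benefits.
full_time, eligible = Bools("full_time eligible")
policy_rule = Implies(full_time, eligible)

# Claim extracted from an LLM response: "a full-time employee
# is NOT eligible for health benefits."
claim = [full_time, Not(eligible)]

# If rule + claim together are unsatisfiable, the claim logically
# contradicts the encoded policy and can be flagged.
solver = Solver()
solver.add(policy_rule, *claim)
if solver.check() == unsat:
    print("Claim contradicts the policy: flag as a potential hallucination")
else:
    print("Claim is consistent with the policy")
```

The key property, and what distinguishes this approach from statistical filtering, is that an unsatisfiable result is a mathematical proof of contradiction under the encoded rules, not a probabilistic score.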
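On the service side, guardrail policies are applied to text through Amazon Bedrock Guardrails, as noted above. The following is a hedged sketch using the ApplyGuardrail API in boto3: the guardrail ARN and version are placeholders, a guardrail with an Automated Reasoning policy attached must already exist in your account, and because the checks were in gated preview at the time of writing, the exact assessment fields returned for Automated Reasoning may differ.

```python
# Sketch: check a model response with Amazon Bedrock Guardrails via
# the ApplyGuardrail API. Guardrail identifier and version are placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="arn:aws:bedrock:us-east-1:111122223333:guardrail/EXAMPLE",  # placeholder
    guardrailVersion="1",
    source="OUTPUT",  # validate a model response rather than a user input
    content=[
        {"text": {"text": "Full-time employees are not eligible for health benefits."}}
    ],
)

# "GUARDRAIL_INTERVENED" means at least one configured policy (content
# filters, PII redaction, or an Automated Reasoning check) flagged the text.
print(response["action"])
for assessment in response.get("assessments", []):
    print(assessment)
```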