For real-world business applications powered by large language models (LLMs), access to live data is crucial. Traditionally, this has meant storing data chunks as document embeddings and keeping them synchronized with a vector database. However, this approach is not always feasible, particularly when prompt grounding is constrained by token limits, complex data structures, or the need for dynamic queries. In such cases, direct database access from an LLM becomes necessary. In this talk, we will explore tools and strategies using LangChain and Semantic Kernel as orchestrators. We’ll highlight the risks that arise from inadequate security measures, such as over-privileged database access and exposure to prompt injection. Additionally, we will discuss possible mitigations to ensure secure and efficient access to live data, enabling developers to build robust and responsive LLM-based applications.
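
As an illustrative sketch of the kind of pattern the talk covers, the snippet below wires LangChain's SQL agent to a live database through a low-privilege, read-only connection; the model name, connection URI, and example question are assumptions for demonstration, not part of the talk materials.

```python
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent
from langchain_openai import ChatOpenAI

# Connect through a low-privilege, read-only database user (hypothetical URI).
db = SQLDatabase.from_uri("postgresql://readonly_user:password@localhost:5432/sales")

# Any chat model supported by LangChain works here; the model name is an assumption.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# The SQL agent turns natural-language questions into SQL and runs them against
# the live database, grounding the LLM in current data rather than pre-computed
# embeddings.
agent = create_sql_agent(llm=llm, db=db, agent_type="openai-tools", verbose=True)

result = agent.invoke({"input": "What was the total revenue last month?"})
print(result["output"])
```

Restricting the connection to a read-only role is one of the mitigations in this space: it limits the blast radius if a prompt-injection attack coerces the agent into issuing destructive queries.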