LLM workflows
Hamilton is great for describing dataflows, and many of the “actions” you want an “agent” to perform can be described as one: e.g., create an embedding of some passed-in text, query a vector database, find the nearest documents, etc.
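As a sketch of that idea, the functions below express an embed-then-search flow in the Hamilton style: each function is a node named after its output, and its parameters name its upstream dependencies. The embedding and similarity logic here are toy stand-ins (assumptions for illustration), not a real model or vector store client:

```python
# A toy "embed text, then find nearest documents" dataflow, written in the
# Hamilton style: each function is a node, and parameter names declare which
# upstream outputs it depends on. The math below is a stand-in
# (an assumption for illustration), not a real embedding model.

def embedding(text: str) -> list[float]:
    """Toy embedding: bucket character codes into a normalized 4-d vector."""
    vec = [0.0] * 4
    for i, ch in enumerate(text):
        vec[i % 4] += ord(ch)
    total = sum(vec) or 1.0
    return [v / total for v in vec]

def nearest_documents(embedding: list[float], document_store: dict) -> list[str]:
    """Rank stored documents by dot-product similarity to the query embedding."""
    def score(doc_vec: list[float]) -> float:
        return sum(a * b for a, b in zip(embedding, doc_vec))
    return sorted(document_store, key=lambda name: score(document_store[name]),
                  reverse=True)

# With Hamilton, a driver runs this module as a dataflow, e.g. (sketch):
#   from hamilton import driver
#   import my_llm_module  # a module containing the functions above
#   dr = driver.Builder().with_modules(my_llm_module).build()
#   dr.execute(["nearest_documents"],
#              inputs={"text": "...", "document_store": {...}})
```

Because `embedding` is just a node in the graph, you can unit-test it directly, and the driver can return it as an output alongside `nearest_documents`.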
The benefits of using Hamilton within an LLM-powered app are that:
you can visualize the dataflow.
you can easily test, modify, compose, and reuse dataflows. For example, you can easily test the dataflow that creates an embedding of some text without having to worry about the rest of the dataflow.
you can easily and surgically swap out the implementation details of components. For example, you can swap out the vector database client based on configuration; this helps ensure you can quickly and easily modify/update your dataflow and have confidence about the impact of that change.
you can use built-in functionality like runtime data quality checks, or extend Hamilton to meet your own needs and augment your dataflow with additional functionality, e.g. caching, logging, etc.
you can request any intermediate output of a dataflow simply by naming it as an output, without any surgery to your code. This is useful for debugging.
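The configuration-driven swap mentioned in the list above can be sketched as follows. Hamilton's `@config.when` decorator picks between implementations of the same node; since real clients are out of scope here, the client below is a hypothetical in-memory stand-in, and the Hamilton-specific wiring is shown in comments so the sketch stays self-contained:

```python
# Sketch of swapping a vector-database client via configuration.
# Hamilton's convention: functions named `vector_db_client__<variant>` all
# resolve to the node `vector_db_client`, and @config.when picks one.
# The client class itself is a hypothetical stand-in (an assumption).

class InMemoryVectorDB:
    """A fake vector-database client, useful for tests and local development."""

    def __init__(self) -> None:
        self._docs: dict[str, list[float]] = {}

    def upsert(self, doc_id: str, vector: list[float]) -> None:
        self._docs[doc_id] = vector

    def query(self, vector: list[float], top_k: int = 3) -> list[str]:
        def score(v: list[float]) -> float:
            return sum(a * b for a, b in zip(vector, v))
        ranked = sorted(self._docs, key=lambda d: score(self._docs[d]),
                        reverse=True)
        return ranked[:top_k]

# With Hamilton installed, the swap would look like (sketch):
#   from hamilton.function_modifiers import config
#
#   @config.when(vector_db="in_memory")
#   def vector_db_client__in_memory() -> InMemoryVectorDB:
#       return InMemoryVectorDB()
#
#   @config.when(vector_db="real")
#   def vector_db_client__real(api_key: str) -> SomeRealClient:  # hypothetical
#       ...
#
# and the driver selects the variant from configuration:
#   dr = driver.Builder().with_config({"vector_db": "in_memory"}) \
#                        .with_modules(my_llm_module).build()
```

Downstream nodes depend only on `vector_db_client`, so nothing else in the dataflow changes when the configuration flips between variants.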
The following examples show how to use Hamilton for LLM workflows:
PDF Summarizer, which shows a partial RAG workflow (missing only the step of going to a vector store to fetch the PDF content) that runs inside FastAPI with a Streamlit frontend.