
PageIndex, a new open-source project by VectifyAI, dramatically improves document analysis for large language models (LLMs), reaching 98.7% accuracy on the FinanceBench benchmark. This reasoning-based retrieval augmented generation (RAG) system bypasses traditional vector databases and chunking, instead building a hierarchical "table-of-contents" tree index from long documents. It enables LLMs to navigate and extract knowledge in a human-like, explainable manner, addressing a critical need for precision in professional contexts.
Traditional vector-based RAG often struggles with complex professional documents because it relies on semantic similarity, which does not always equate to true relevance. PageIndex tackles this by fostering multi-step reasoning directly over the document's inherent structure. The system's approach stands as a stark contrast to "vibe retrieval," offering a transparent and traceable method for information extraction.
This innovation emerges at a time when the reliability of AI systems faces increasing scrutiny, with reports indicating potential for LLMs to homogenize reasoning and encourage "cognitive surrender," as highlighted in discussions around the Pentagon's adoption of AI tools. PageIndex aims to mitigate these concerns by providing clear, auditable retrieval paths for critical information.
PageIndex simulates how human experts delve into complex documents. It first generates a comprehensive, semantic tree index from any lengthy PDF or Markdown file, transforming documents into structured knowledge maps. This index acts as a dynamic table of contents, allowing LLMs to perform reasoning-based searches through the tree structure, directly locating the most pertinent sections.
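The resulting index can be pictured as nested nodes, each covering a contiguous page range of the source document. The schema below is an illustrative sketch only; the field names (`node_id`, `start_page`, and so on) are assumptions for the example, not PageIndex's actual output format:

```python
# Illustrative sketch of a table-of-contents tree index.
# Field names (node_id, title, start_page, end_page, children) are
# assumptions for this example, not the project's real schema.
tree = {
    "node_id": "0",
    "title": "2023 Annual Report",
    "start_page": 1,
    "end_page": 120,
    "children": [
        {
            "node_id": "1",
            "title": "Risk Factors",
            "start_page": 10,
            "end_page": 35,
            "children": [],
        },
        {
            "node_id": "2",
            "title": "Financial Statements",
            "start_page": 36,
            "end_page": 90,
            "children": [
                {
                    "node_id": "2.1",
                    "title": "Balance Sheet",
                    "start_page": 40,
                    "end_page": 45,
                    "children": [],
                }
            ],
        },
    ],
}

def find_node(node, title):
    """Depth-first lookup of a section by its title."""
    if node["title"] == title:
        return node
    for child in node["children"]:
        hit = find_node(child, title)
        if hit:
            return hit
    return None

print(find_node(tree, "Balance Sheet")["node_id"])  # -> 2.1
```

Because every node maps back to an explicit page range, any retrieved answer can be traced to the exact section it came from.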
The core strength of PageIndex lies in its departure from conventional RAG pitfalls. It operates without a vector database, eliminating the need for embedding generation and storage, and it foregoes arbitrary document chunking. Instead, it respects natural document divisions, leading to more contextually aware retrieval. The system's emphasis on reasoning over simple similarity provides a robust foundation for better explainability and traceability, essential for domain-specific tasks like financial analysis.
This methodology enables LLMs to "think" their way through information, much like an analyst would cross-reference sections of a report. For instance, the system can parse financial reports, regulatory filings, or academic textbooks, which frequently exceed typical LLM context windows. By default, it allows for a maximum of 10 pages per node and up to 20,000 tokens per node, providing granular control over how information is processed.
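The "navigate like an analyst" idea can be sketched as a loop in which a model repeatedly picks the most relevant child section until it reaches a leaf. In the hypothetical sketch below, `choose_child` stands in for a real LLM call; the keyword-overlap chooser is a toy stand-in, not PageIndex's implementation:

```python
def tree_search(node, query, choose_child):
    """Descend a table-of-contents tree by asking choose_child which
    child section best matches the query, until a leaf is reached."""
    path = [node["title"]]
    while node.get("children"):
        titles = [c["title"] for c in node["children"]]
        # In PageIndex this decision is made by an LLM reasoning over
        # section titles and summaries; here it is a plain callback.
        idx = choose_child(query, titles)
        node = node["children"][idx]
        path.append(node["title"])
    return node, path

def keyword_chooser(query, titles):
    """Toy stand-in for the LLM: pick the child whose title shares
    the most words with the query."""
    words = set(query.lower().split())
    scores = [len(words & set(t.lower().split())) for t in titles]
    return scores.index(max(scores))

tree = {
    "title": "Annual Report",
    "children": [
        {"title": "Risk Factors", "children": []},
        {"title": "Financial Statements", "children": [
            {"title": "Balance Sheet", "children": []},
            {"title": "Income Statement", "children": []},
        ]},
    ],
}

leaf, path = tree_search(tree, "financial statements balance sheet",
                         keyword_chooser)
print(path)  # -> ['Annual Report', 'Financial Statements', 'Balance Sheet']
```

The returned `path` is exactly the kind of auditable retrieval trail the article describes: every hop in the search is a named section, not an opaque similarity score.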
PageIndex is implemented in Python and is openly available for self-hosting via its GitHub repository. Developers can generate a PageIndex tree from a PDF document by installing dependencies and setting an OpenAI API key. The process involves a single command: `python3 run_pageindex.py --pdf_path /path/to/your/document.pdf`.
The framework supports various customization options, including specifying the OpenAI model (defaulting to `gpt-4o-2024-11-20`), controlling the number of pages checked for a table of contents, and toggling the inclusion of node IDs or summaries. Markdown file support is also provided, leveraging heading levels for structural interpretation, though direct conversion from PDFs using other tools is not recommended due to potential hierarchy loss. For those seeking immediate deployment without local setup, VectifyAI offers PageIndex through a ChatGPT-style chat platform, or via its MCP and API services for broader integration.
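Putting those options together, an invocation might look like the following. The flag spellings are assumptions based on the documented defaults; consult the repository's README for the exact option names:

```shell
export OPENAI_API_KEY="<your-key>"

# --toc-check-page-num: how many pages to scan for a table of contents
# --max-page-num-each-node / --max-token-num-each-node: the 10-page /
#   20,000-token per-node defaults described above
# --if-add-node-id / --if-add-node-summary: toggle node IDs and summaries
python3 run_pageindex.py \
    --pdf_path /path/to/your/document.pdf \
    --model gpt-4o-2024-11-20 \
    --toc-check-page-num 20 \
    --max-page-num-each-node 10 \
    --max-token-num-each-node 20000 \
    --if-add-node-id yes \
    --if-add-node-summary no
```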
PageIndex represents a significant shift in how AI systems interact with and interpret large volumes of information. By prioritizing explicit reasoning and structured navigation, it directly addresses critical concerns about LLM accuracy and the potential for "cognitive surrender," which can plague AI applications. The project's benchmark-topping performance on FinanceBench with 98.7% accuracy underscores its capability to deliver precise results in demanding professional environments.
This reasoning-based approach offers a compelling alternative to opaque vector search methods, providing developers with a tool that enhances both the effectiveness and transparency of RAG systems. As the landscape of AI security evolves, with instances of supply chain attacks targeting open-source projects like Trivy and KICS, PageIndex's focus on clear, auditable retrieval paths could foster greater trust and reliability in AI-powered decision-making. The project signals a broader trend towards more robust, explainable AI, moving beyond mere statistical correlation to genuine understanding.