Discover how ABAP developers can enhance their RAP skills using NotebookLM, Google’s AI-powered research assistant.
This article is the second part of LeverX's exploration into how NotebookLM and its integrated Large Language Model (LLM) can accelerate the learning process for the ABAP RESTful Application Programming Model (RAP). You can find the first part of our research here.
If you're not yet familiar with NotebookLM, we strongly recommend reading the initial article to learn about its features and potential.
In this installment, LeverX presents a new collection of advanced learning scenarios developed to deepen RAP understanding further.
Short disclaimer
This article reflects LeverX's personal experimentation and evaluation of whether integrating NotebookLM into a personalized learning workflow is worthwhile. The short answer: yes, it is — but with certain caveats. At the very least, it's worth a try.
Scenario 1: Riddles – Engaging Cognitive Reasoning
This scenario is essentially the inverse of the “Term-Definition Game” discussed in Part 1.
This exercise engages a different mode of cognitive reasoning. Instead of presenting a term for definition, we prompted the LLM to generate a descriptive riddle about a specific aspect of an RAP Business Object (BO). Our task was to infer and identify the underlying RAP concept embedded in the response.
To illustrate, our experts asked the LLM to create three riddles, which we then attempted to solve. We provided a critical instruction to avoid including any explicit clues in the riddle text that might reveal the answer too easily, since the model occasionally introduces hints that make the solution overly obvious.
Our prompt: create riddles
NotebookLM responded with riddles related to Validation, ETag, and Action.
NotebookLM response: Created riddles
Our prompt: answer the riddles
While most of our answers were correct, we hesitated on the third one, unsure whether the concept was an Action or a Function. We initially overlooked the phrase "manipulate data", which proved to be the key differentiator.
NotebookLM response: riddle, partially correct answer
The LLM's response not only recognized completely incorrect answers but also identified a partially correct one. In its explanation, it outlined both the commonalities and the distinctions between Action and Function. It also offered guidance on how to read the riddle more carefully, highlighting the phrase "manipulate data" as a critical indicator: data modification is a defining characteristic of an Action in RAP and is what differentiates it from a Function.
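The distinction the riddle hinged on can also be seen directly in the behavior definition language (BDL). The fragment below is our own illustrative sketch rather than something taken from our sources; the entity and member names are hypothetical, but the pattern follows the RAP flight reference scenario: an action may modify instance data, while a function is a read-only computation.

```abap
define behavior for ZI_Travel alias Travel
{
  // An action may manipulate data:
  // accepting a travel changes its status in the buffer.
  action ( features : instance ) acceptTravel result [1] $self;

  // A function is read-only: it computes and returns a value
  // without modifying the instance.
  function getTotalPrice result [1] ZD_PriceResult;
}
```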
Scenario 2: Documentation Syntax Explanation – Decoding Formal Language
Reading ABAP keyword documentation for RAP can sometimes be overwhelming, particularly due to the highly formalized syntax. For instance, the official definitions of standard RAP Business Object operations, such as create, update, and delete, often include numerous modifiers and optional clauses. Navigating all of these variations can be challenging, especially when trying to extract the essential meaning.
Example: RAP BO standard operations
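As a concrete anchor, here is what declaring these standard operations looks like in a minimal managed behavior definition. All names (ZI_Travel, ztravel, the behavior pool class) are illustrative and not taken from the documentation snippet we used; the point is how simple the declarations are once the optional additions from the formal definition are stripped away.

```abap
managed implementation in class zbp_i_travel unique;

define behavior for ZI_Travel alias Travel
persistent table ztravel
lock master
{
  // The three standard RAP BO operations in their simplest form,
  // without any of the optional modifiers the documentation lists.
  create;
  update;
  delete;

  field ( readonly, numbering : managed ) TravelUUID;
}
```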
To test this, we fed a snippet from the official documentation to the LLM, which parsed it and explained it line by line. Initially, the response was overly verbose, so we prompted it several times to produce more concise output. The final result was a much more streamlined explanation.
Our prompt: explain documentation syntax
For this scenario, the formal definition of a RAP Non-Factory Action served as the input.
The output was:
NotebookLM Response: Documentation Syntax, Action part 1
In its response, the LLM did a commendable job explaining not only the RAP-specific keywords—such as internal or static additions related to actions—but also elements of the ABAP documentation syntax, like the use of enclosing parentheses (...). However, it did not fully address all aspects of the formal syntax. Constructs such as brackets [], braces {}, {I}, and other composite notations were omitted.
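To give a flavor of the notation in question, here is a simplified paraphrase of how the documentation expresses an action definition. This is our own condensed rendering, not the verbatim documentation text: square brackets mark optional parts, braces group alternatives, and the vertical bar separates them.

```
[internal] [static] action [( features : instance )] ActionName
  [parameter {ParameterEntity|$self}]
  [result [selective] cardinality {ResultEntity|$self}];
```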
You can also reverse the exercise and ask the LLM to explain the formal syntax conventions of the ABAP keyword documentation itself, so that every notation element is clear to you.
NotebookLM response: documentation syntax, action part 2
Essentially, the LLM walks through the provided syntax and offers a word-by-word explanation. However, it is critical to review the output carefully. During experimentation, several instances were encountered where the phrasing lacked precision and required refinement. It is recommended to save the response as a note, revise it to resolve any ambiguities or inconsistencies, and use the polished version as a resource to strengthen your understanding of RAP.
Scenario 3: Code Explanation – Understanding Logic Step-by-Step
The objective here is to provide NotebookLM with a code snippet and receive a detailed, step-by-step explanation along with a breakdown of the specific RAP concepts involved. Each of them can be further reviewed individually.
Our prompt: explain the code
The example code was sourced from the behavior implementation of /DMO/I_TRAVEL_M. Ultimately, the LLM returned a structured walkthrough of the logic as follows:
NotebookLM response: code explanation, validate_agency
The explanation provided by the LLM is generally sound and follows the logic of the code in reasonable detail. However, one phrasing stood out as imprecise: the statement "agency_id does not exist in another table or database result called agencies_db". This wording feels somewhat artificial.
The likely cause is that the LLM attempted to independently interpret the NOT line_exists(...) expression, along with the specific statement NOT line_exists( agencies_db[ agency_id = travel-agency_id] ), and then merged the two interpretations. The result is a hybrid explanation that lacks clarity and precision.
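For readers without the demo package at hand, the logic under discussion follows the standard RAP validation pattern. The sketch below is a simplified paraphrase of the flight reference scenario, not a verbatim copy of the /DMO/I_TRAVEL_M implementation; the message handling is abbreviated.

```abap
METHOD validate_agency.
  " Read the agency ID of each travel instance from the transactional buffer.
  READ ENTITIES OF /dmo/i_travel_m IN LOCAL MODE
    ENTITY travel
      FIELDS ( agency_id )
      WITH CORRESPONDING #( keys )
    RESULT DATA(travels).

  " Collect all referenced agency IDs and look them up
  " in the database with a single SELECT.
  DATA agencies TYPE SORTED TABLE OF /dmo/agency WITH UNIQUE KEY agency_id.
  agencies = CORRESPONDING #( travels DISCARDING DUPLICATES
                              MAPPING agency_id = agency_id
                              EXCEPT * ).
  DELETE agencies WHERE agency_id IS INITIAL.

  IF agencies IS NOT INITIAL.
    SELECT agency_id FROM /dmo/agency
      FOR ALL ENTRIES IN @agencies
      WHERE agency_id = @agencies-agency_id
      INTO TABLE @DATA(agencies_db).
  ENDIF.

  " Flag every travel whose agency ID is missing from the lookup result.
  LOOP AT travels INTO DATA(travel).
    IF travel-agency_id IS INITIAL
       OR NOT line_exists( agencies_db[ agency_id = travel-agency_id ] ).
      APPEND VALUE #( %tky = travel-%tky ) TO failed-travel.
      APPEND VALUE #( %tky = travel-%tky
                      %element-agency_id = if_abap_behv=>mk-on )
             TO reported-travel.
    ENDIF.
  ENDLOOP.
ENDMETHOD.
```

Note how `agencies_db` is simply the result of the database lookup, which is exactly the nuance the LLM's "another table or database result" phrasing blurred.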
This example highlights a common limitation of LLM-generated content, underscoring the importance of critically reviewing all outputs. That said, such discrepancies can be instructive, as they prompt deeper engagement with the material. In fact, when faced with an inaccurate explanation, you can challenge the model directly by asking why it arrived at that conclusion, using the error as a learning opportunity.
Scenario 4: RAP Concept Extraction – Identifying Key Learning Areas
Continuing from the previous prompt, a request was made to NotebookLM to extract and list RAP concepts present in the code sample.
NotebookLM response: concept extraction, validate_agency
This yielded a structured list of topics worthy of further exploration. For example, one could now ask follow-up questions like:
- What is the transactional buffer? What role does it play in RAP?
- Who implements the transactional buffer?
- How does it differ between managed and unmanaged implementations?
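To ground the last two questions: in a managed scenario the RAP framework provides the transactional buffer, while in an unmanaged scenario the application developer implements it, typically as internal tables inside the behavior pool. A minimal, purely hypothetical sketch of such a hand-rolled buffer:

```abap
" Hypothetical transactional buffer for an unmanaged implementation:
" changed instances are held in memory until the save sequence
" persists them to the database.
CLASS lcl_travel_buffer DEFINITION.
  PUBLIC SECTION.
    CLASS-DATA mt_create TYPE STANDARD TABLE OF ztravel WITH EMPTY KEY.
    CLASS-DATA mt_update TYPE STANDARD TABLE OF ztravel WITH EMPTY KEY.
    CLASS-DATA mt_delete TYPE STANDARD TABLE OF ztravel WITH EMPTY KEY.
ENDCLASS.
```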
This technique helps identify study priorities and focus your reading of official SAP documentation.
Bonus Scenario: Integration with Other LLMs – Expanding Your AI Toolkit
NotebookLM can also serve as a prompt generator for other LLMs like ChatGPT, Perplexity, Gemini, or Claude.
NotebookLM response: prompt generation
Additionally, you can experiment with various styles of questioning. For instance, the LLM can be asked to generate a series of chained questions that progressively explore a specific RAP concept in greater depth. You can submit the entire prompt as a single block or proceed step by step, engaging with each question individually. This method is highly adaptable and can be used to generate tailored prompts for any target LLM.
Another valuable integration strategy involves using ChatGPT for fact-checking. As noted earlier, NotebookLM delivers an estimated accuracy of around 90% (based on personal experience rather than formal benchmarking). To ensure reliability, you can cross-reference its outputs with ChatGPT, using it as a secondary validation layer to catch potential inaccuracies.
ChatGPT prompt: fact-checking request
As input, we used our cheat sheet on RAP numbering; the history of this cheat sheet can be found in our previous article.
ChatGPT output: fact-checking result
Even so, keep in mind that you will still have to verify everything manually.
Conclusion
In Part 1, a full list of pros, cons, and workflow suggestions was outlined. Please refer back to it for a detailed breakdown.
Here, it's important to reiterate the core principle:
NotebookLM is a powerful AI-enhanced learning tool, but its outputs must be critically reviewed and verified before use. AI is not a replacement for developer expertise but a multiplier for those willing to guide it.
Is this approach recommended? Yes, cautiously. It's definitely worth trying, especially if you use the LLM as a learning partner rather than blindly relying on it.