View Application Details
To drill down into the details of your Ocean for Apache Spark application, start with the Overview tab, which gives you quick access to insights and summary data about the application: current cost, efficiency score, app metrics, and logs. Additional tabs show more details about the app, including its configuration and a list of Spark issues.
To get to the App Overview tab, do the following:
- In the Spot console, go to Ocean for Spark in the menu tree and click Applications.
- In the list of applications, click an app name.

The App page opens with the Overview tab displayed and the app name at the top. Next to the app name, a status icon indicates the application's status.
The App Overview includes the following main areas:
- Metrics
- App Info
- Insights
- Logs
Metrics
Application Metrics is a summary line providing data about your app usage. The following information is presented:
- Cloud Compute Cost: The cloud provider’s compute costs incurred by this application.
- Core Hours: The core resources used by the application. This metric is calculated as the sum, over each container (driver or executor), of its uptime duration multiplied by the number of cores allocated to it (see the sketch after this list).
- Data Read: Amount of data read by this application.
- Data Written: Amount of data written by this application.
- Duration: Amount of time this application has run.
- Efficiency Score: The fraction of the time that Spark executor cores are running Spark tasks.
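For example, here is a minimal sketch of how Core Hours and Efficiency Score can be computed from per-container data. The container records, field names, and numbers are hypothetical and only illustrate the formulas above; they are not an Ocean for Apache Spark API.

```python
# Illustrative only: hypothetical per-container data for one application.
# Each container (driver or executor) has an uptime and allocated cores.
containers = [
    {"role": "driver",   "uptime_hours": 2.0, "cores": 1},
    {"role": "executor", "uptime_hours": 1.5, "cores": 4},
    {"role": "executor", "uptime_hours": 1.5, "cores": 4},
]

# Core Hours: sum over containers of uptime multiplied by allocated cores.
core_hours = sum(c["uptime_hours"] * c["cores"] for c in containers)

# Efficiency Score: fraction of executor core time spent running Spark tasks.
# task_core_hours is assumed to come from task-level metrics.
executor_core_hours = sum(
    c["uptime_hours"] * c["cores"] for c in containers if c["role"] == "executor"
)
task_core_hours = 9.0  # hypothetical total core-hours spent in Spark tasks
efficiency_score = task_core_hours / executor_core_hours

print(f"Core Hours: {core_hours:.1f}")              # 14.0
print(f"Efficiency Score: {efficiency_score:.0%}")  # 75%
```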
App Info
The App Info area gives you a quick point of reference for vital information about the application.

You can edit the App Name by clicking the edit icon by the name.
Insights
The Insights area gives information about the resource usage of the application over time. The first tab shows executor CPU usage, broken down by categories (CPU, I/O, shuffle, GC, Spark internals). This graph aligns with a timeline of your Spark jobs and stages, so that it's easy to correlate CPU metrics with the code of your Spark application.

The second tab reports the memory usage of your Spark executors over the application's jobs and stages timeline. On the left-hand side, you can see the peak memory usage over the total available physical memory for each executor, broken down by category (JVM, Python, Other). This graph helps you tune your container memory sizes so that memory usage stays in the 70-90% range. Click an executor in the list to view its detailed memory usage in the bottom graph.
The memory usage depicted in this graph is different from the memory reported in the Spark UI.
- The graphs in this tab report the Resident Set Size (RSS) memory used by Spark and its child processes.
- RSS refers to the amount of physical memory (RAM) that a process is currently using. This memory allows a process to perform operations quickly without relying on slower disk storage.
- In Apache Spark, the processTreeRSS metric is categorized into:
  - Java
  - Python
  - Other (e.g., R)

  These categories reflect the different programming languages Spark can use, each with its own memory management.
- RSS measures all memory used by a process, not just specific types like on-heap or off-heap memory. RSS also includes:
  - Program code
  - Stack memory
  - Mapped memory
  - Shared libraries
  - Other memory types
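If the graph shows peak usage well below 70% of available physical memory, you might shrink your executor containers; if it is close to 100%, you might grow them. Below is a minimal PySpark sketch of the standard Spark memory settings you could adjust; the specific sizes are hypothetical examples rather than recommendations, and depending on how you submit applications these settings may instead be supplied through your application configuration.

```python
from pyspark.sql import SparkSession

# Illustrative only: container memory settings you might adjust after
# reading the memory graphs. The sizes below are hypothetical.
spark = (
    SparkSession.builder
    .appName("memory-tuning-example")
    # On-heap JVM memory per executor.
    .config("spark.executor.memory", "6g")
    # Extra non-JVM memory (Python workers, native libraries, etc.) per executor.
    .config("spark.executor.memoryOverhead", "2g")
    .getOrCreate()
)
```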

Logs
You can view the Driver Logs or the Kubernetes Logs while the application is running. You can also download the logs once the application has finished running.
If you want to change the severity level of your driver logs, you can do so from your Spark application code, for example by calling sc.setLogLevel("DEBUG").
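For instance, a minimal PySpark sketch (the equivalent call works in Scala as well):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("log-level-example").getOrCreate()
sc = spark.sparkContext

# Raise driver log verbosity to DEBUG; other valid levels include
# "INFO", "WARN", and "ERROR".
sc.setLogLevel("DEBUG")
```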
View Configuration
To view the configuration, click the Configuration tab.
View Spark Issues
Click the Spark Issues tab to see a list of all issues and their error messages. Click an issue to expand the card and view more detailed information about the error or warning.

Related Topics
Learn more about monitoring jobs.