Get Request

GET /api/public/v2/requests/{request_id}
Retrieve the full payload of a logged request by its ID, returned as a prompt blueprint. This is useful for:
  • Request replay: Re-run a request with the same input and parameters
  • Debugging: Inspect the exact prompt and model configuration used
  • Dataset creation: Extract request data for use in evaluations
  • Cost analysis: Review token usage and pricing for individual requests
The response includes the prompt blueprint (input messages, model configuration, and parameters) along with token counts, timing data, and a trace_id field linking to the associated trace (if the request was logged via tracing or OpenTelemetry).

Authentication

This endpoint requires API key authentication via the X-API-KEY header.

Example

curl -H "X-API-KEY: your_api_key" \
  https://api.promptlayer.com/api/public/v2/requests/12345
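The same call can be made from Python. A minimal sketch using only the standard library (the endpoint and header come from the example above; the API key and request ID are placeholders):

```python
import json
import urllib.request

def get_request(request_id: int, api_key: str) -> dict:
    """Fetch a logged request as a prompt blueprint (stdlib only)."""
    url = f"https://api.promptlayer.com/api/public/v2/requests/{request_id}"
    req = urllib.request.Request(url, headers={"X-API-KEY": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (requires a valid PromptLayer API key):
# data = get_request(12345, "your_api_key")
# print(data["model"], data["tokens"], data["price"])
```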

Response

{
  "success": true,
  "prompt_blueprint": {
    "prompt_template": {
      "type": "chat",
      "messages": [
        {
          "role": "user",
          "content": [{ "type": "text", "text": "Hello, world!" }]
        }
      ]
    },
    "metadata": {
      "model": {
        "provider": "openai",
        "name": "gpt-4",
        "parameters": {}
      }
    },
    "inference_client_name": null
  },
  "request_id": 12345,
  "provider": "openai",
  "model": "gpt-4",
  "input_tokens": 12,
  "output_tokens": 25,
  "tokens": 37,
  "price": 0.00123,
  "request_start_time": "2024-04-03T20:57:25",
  "request_end_time": "2024-04-03T20:57:26",
  "latency_ms": 1000.0,
  "trace_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6"
}
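The token and cost fields in the response relate straightforwardly; a small sketch using the example values above (the `response` dict simply restates them):

```python
# Values copied from the example response above
response = {
    "input_tokens": 12,
    "output_tokens": 25,
    "tokens": 37,
    "price": 0.00123,
}

# tokens is the total: input plus output
assert response["tokens"] == response["input_tokens"] + response["output_tokens"]

# Normalized cost, useful when comparing requests across models
cost_per_1k = response["price"] / response["tokens"] * 1000
print(f"${cost_per_1k:.5f} per 1K tokens")  # prints "$0.03324 per 1K tokens"
```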

Headers

X-API-KEY
string
required

API key for authentication.

Path Parameters

request_id
integer
required

The ID of the request to retrieve.

Required range: x >= 1

Response

Successfully retrieved request as a prompt blueprint.

success
boolean

Indicates the request was successful.

prompt_blueprint
object

The request converted to a prompt blueprint format.

request_id
integer

The ID of the request.

provider
string

The LLM provider (e.g. openai, anthropic).

model
string

The model name (e.g. gpt-4, claude-3-sonnet).

input_tokens
integer | null

Number of input tokens used.

output_tokens
integer | null

Number of output tokens generated.

tokens
integer | null

Total token count (input + output).

price
number | null

Cost of the request in USD.

request_start_time
string | null

ISO 8601 timestamp of when the request started.

request_end_time
string | null

ISO 8601 timestamp of when the request ended.

latency_ms
number | null

Request latency in milliseconds, derived from start and end times.
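The derivation can be reproduced from the example timestamps:

```python
from datetime import datetime

# Timestamps from the example response above
start = datetime.fromisoformat("2024-04-03T20:57:25")
end = datetime.fromisoformat("2024-04-03T20:57:26")

# latency_ms is the elapsed wall-clock time in milliseconds
latency_ms = (end - start).total_seconds() * 1000
print(latency_ms)  # prints 1000.0, matching latency_ms in the example
```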

trace_id
string | null

The trace ID associated with this request, if the request was part of a trace.