Conversation
6 issues found across 12 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="kong/llm/usage.py">
<violation number="1" location="kong/llm/usage.py:21">
P2: Default pricing tier drops cache cost tracking. The old code derived cache rates from `input_rate` for all models (including the fallback), so unknown models still had nonzero cache costs. Now `_DEFAULT_PRICING` has `cache_write_rate=0.0` and `cache_read_rate=0.0`, silently ignoring cache token costs for any model not in the registry.</violation>
</file>
<file name="kong/llm/limits.py">
<violation number="1" location="kong/llm/limits.py:86">
P1: `time.sleep()` is called while holding `self._lock`, blocking all other threads from both `wait_if_needed` and `record_request` for up to 60 seconds. Compute the required sleep duration under the lock, release it, sleep, then re-acquire and re-check.</violation>
<violation number="2" location="kong/llm/limits.py:88">
P2: `now` and `window_start` are not recalculated after the RPM sleep, so the TPM check uses a stale 60-second window. Token log entries that expired during the RPM sleep are still counted, potentially triggering a redundant second sleep.</violation>
</file>
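One way to implement both fixes together, sketched with assumed names (`_request_log` and `rpm_limit` are illustrative; the real limiter also tracks tokens): compute the sleep duration under the lock, release the lock before sleeping, then loop so that `now` and the window are recomputed fresh on every re-check:

```python
import threading
import time
from collections import deque

class RateLimiter:
    """Sketch of the suggested fix; not the actual kong/llm/limits.py code."""

    def __init__(self, rpm_limit: int, window: float = 60.0):
        self.rpm_limit = rpm_limit
        self.window = window
        self._lock = threading.Lock()
        self._request_log = deque()  # monotonic timestamps of past requests

    def record_request(self) -> None:
        with self._lock:
            self._request_log.append(time.monotonic())

    def wait_if_needed(self) -> None:
        while True:
            with self._lock:
                now = time.monotonic()           # fresh timestamp each pass
                window_start = now - self.window  # window recomputed after any sleep
                while self._request_log and self._request_log[0] < window_start:
                    self._request_log.popleft()   # drop entries that expired
                if len(self._request_log) < self.rpm_limit:
                    return                        # under the limit: no sleep
                # Oldest entry decides when capacity frees up.
                sleep_for = self._request_log[0] - window_start
            # Lock is released here, so record_request and other waiters
            # are not blocked while this thread sleeps.
            time.sleep(sleep_for + 1e-3)  # small cushion so re-check sees expiry
```

Because the loop re-acquires the lock and recomputes `window_start` after sleeping, a TPM check placed in the same loop would also see a fresh window rather than stale entries.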
<file name="kong/banner.py">
<violation number="1" location="kong/banner.py:96">
P2: `provider.value.title()` produces `"Openai"` for the OpenAI provider instead of `"OpenAI"`. Consider using a display-name mapping instead of relying on `str.title()`.</violation>
</file>
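A sketch of the display-name mapping the review suggests. The `Provider` enum shape is assumed from context; only the `"openai"` → `"OpenAI"` casing problem is taken from the comment:

```python
from enum import Enum

class Provider(Enum):  # assumed shape of kong's provider enum
    ANTHROPIC = "anthropic"
    OPENAI = "openai"

# Explicit display names avoid str.title() artifacts like "Openai".
_DISPLAY_NAMES = {
    Provider.ANTHROPIC: "Anthropic",
    Provider.OPENAI: "OpenAI",
}

def display_name(provider: Provider) -> str:
    # Fall back to title-casing only for providers missing from the map.
    return _DISPLAY_NAMES.get(provider, provider.value.title())
```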
<file name="kong/llm/__init__.py">
<violation number="1" location="kong/llm/__init__.py:4">
P1: Unconditional import of `OpenAIClient` will crash at import time when the optional `openai` dependency is not installed. Since `openai` is declared under `[project.optional-dependencies]`, this import should be guarded so users who only use the Anthropic backend aren't broken.</violation>
</file>
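A sketch of the guarded import for `kong/llm/__init__.py`. The module path mirrors the review comment; the helper function and its error message are illustrative:

```python
try:
    # `openai` is an optional extra; guard the import so Anthropic-only
    # installs don't crash at `import kong.llm` time.
    from kong.llm.openai_client import OpenAIClient
except ImportError:
    OpenAIClient = None  # sentinel: OpenAI backend unavailable

def require_openai_client():
    """Raise a helpful error at use time instead of at import time."""
    if OpenAIClient is None:
        raise RuntimeError(
            "OpenAI support is not installed; run: pip install 'kong[openai]'"
        )
    return OpenAIClient
```

Deferring the failure to `require_openai_client()` keeps the import side-effect free while still giving OpenAI users an actionable message.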
<file name="kong/llm/openai_client.py">
<violation number="1" location="kong/llm/openai_client.py:132">
P2: `last_text` is never updated inside the tool-calling branch. If the model requests tools on every round until `max_rounds` is exhausted, the fallback parses an empty string, silently returning an empty/broken `LLMResponse`. Update `last_text` with the message content on each iteration regardless of finish reason, matching the Anthropic client's fallback behavior.</violation>
</file>
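A simplified skeleton of the fallback behavior the review asks for. The loop shape is hypothetical (the real client iterates over SDK responses, not `(finish_reason, content)` tuples); the point is that `last_text` is updated on every round, not only on the terminal one:

```python
def run_tool_loop(rounds, max_rounds=3):
    """`rounds` yields (finish_reason, content) pairs simulating the
    per-round chat-completion results; stand-ins for the real SDK objects."""
    last_text = ""
    for i, (finish_reason, content) in enumerate(rounds):
        # Capture text on EVERY round, regardless of finish reason, so the
        # fallback never parses an empty string when the model keeps
        # requesting tools until max_rounds is exhausted.
        if content:
            last_text = content
        if finish_reason != "tool_calls":
            return content  # normal termination
        if i + 1 >= max_rounds:
            break  # rounds exhausted while still requesting tools
    return last_text  # fallback: best text seen so far
```

This matches the Anthropic client's behavior as described: the fallback parse always operates on the most recent non-empty message content.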
Since this is your first cubic review, here's how it works:
- cubic automatically reviews your code and comments on bugs and improvements
- Teach cubic by replying to its comments. cubic learns from your replies and gets better over time
- Add one-off context when rerunning by tagging @cubic-dev-ai with guidance or docs links (including llms.txt)
- Ask questions if you need clarification on any suggestion
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Force-pushed aede8f2 to ade430d, then ade430d to ec8ebb0.
Move time.sleep() outside the lock in wait_if_needed() to avoid blocking all other threads (including record_request) for up to 60 seconds. The method now computes the required sleep duration under the lock, releases it, sleeps, then re-acquires and re-checks conditions in a loop.
2 issues found across 9 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="kong/agent/analyzer.py">
<violation number="1" location="kong/agent/analyzer.py:22">
P0: Circular import: `deobfuscator.py` imports `AnalysisContext, LLMClient, LLMResponse` from `analyzer.py` at module level, so moving this import to the top level creates an `analyzer → deobfuscator → analyzer` cycle that will raise `ImportError` at runtime. The previous lazy import inside `analyze()` was intentional to break this cycle.</violation>
</file>
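The bug-fix summary in this PR mentions `TYPE_CHECKING` as the eventual remedy. A sketch of that pattern for `deobfuscator.py`, keeping the names available for annotations without recreating the runtime cycle (the function signature is illustrative):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by type checkers, never at runtime, so it cannot
    # recreate the analyzer -> deobfuscator -> analyzer import cycle.
    from kong.agent.analyzer import AnalysisContext, LLMClient, LLMResponse

def deobfuscate(ctx: "AnalysisContext", client: "LLMClient") -> "LLMResponse":
    # String annotations mean the names need not exist at runtime.
    raise NotImplementedError  # body elided in this sketch
```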
<file name="kong/evals/metrics.py">
<violation number="1" location="kong/evals/metrics.py:7">
P0: Circular import: `metrics.py` ↔ `harness.py` both import each other at module level. The previous lazy import inside `overall_score()` existed to break this cycle. Moving it to the top level will raise an `ImportError` (or `AttributeError`) at import time.</violation>
</file>
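The remedy the review points to is restoring the deferred import. A runnable sketch of the pattern, with `statistics.fmean` standing in for the real `harness` call so the snippet runs standalone:

```python
def overall_score(results: list) -> float:
    # Deferred import: runs on first call, after both modules have finished
    # loading, which is exactly why the original lazy import broke the
    # metrics <-> harness cycle. The real code would import
    # `kong.evals.harness` here; `statistics` is a stand-in.
    import statistics
    return statistics.fmean(results) if results else 0.0
```

The trade-off is a tiny per-call import lookup (cached after the first call) in exchange for module-load order independence.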
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
- sqlite def overkill for config rn but more just laying groundwork for future local db saves.
WIP. Needs setup wizard
Summary by cubic
Adds OpenAI SDK integration and a guided setup wizard with persisted provider config; Kong now runs on OpenAI (`gpt-4o*`) or Anthropic with model-aware limits and unified cost/usage. Install the `openai` extra, set `OPENAI_API_KEY`, run `kong setup`, then `kong analyze -p openai -m gpt-4o`.

New Features
- `OpenAIClient` via the `openai` SDK with single, batch, and tool flows; tracks per-model tokens (incl. cached), calls, and cost using provider-specific pricing.
- `kong setup` saves enabled providers and a default in a local SQLite store; `analyze` requires setup and auto-selects a provider from CLI flag > saved default > any enabled key.
- `--provider`/`-p` and `--model`/`-m` flags; a provider-aware `.env` loader and key checks support `ANTHROPIC_API_KEY` and `OPENAI_API_KEY`.
- Model-aware limits (`get_model_limits`) used for chunking and output caps; Supervisor uses the client's `model` across batch and synthesis; final stats report per-model usage and total cost.

Bug Fixes
- Rate limiter no longer sleeps while holding the lock in `wait_if_needed`.
- Circular imports broken via `TYPE_CHECKING` and by updating the `LLMClient` protocol and module boundaries.

Written for commit da05c09. Summary will update on new commits.