Julep2Api: supports a dozen-plus models, no restrictions

Posted yesterday at 18:56

I dug into https://docs.julep.ai/introduction/julep a bit.
It supports a dozen-plus models with no restrictions at all, and the selection is pretty decent:
['mistral-large-2411', 'o1', 'text-embedding-3-large', 'vertex_ai/text-embedding-004', 'claude-3.5-haiku', 'cerebras/llama-4-scout-17b-16e-instruct', 'llama-3.1-8b', 'magnum-v4-72b', 'voyage-multilingual-2', 'claude-3-haiku', 'gpt-4o', 'BAAI/bge-m3', 'openrouter/meta-llama/llama-4-maverick', 'openrouter/meta-llama/llama-4-scout', 'claude-3.5-sonnet', 'hermes-3-llama-3.1-70b', 'claude-3.5-sonnet-20240620', 'qwen-2.5-72b-instruct', 'l3.3-euryale-70b', 'gpt-4o-mini', 'cerebras/llama-3.3-70b', 'o1-preview', 'gemini-1.5-pro-latest', 'l3.1-euryale-70b', 'claude-3-sonnet', 'Alibaba-NLP/gte-large-en-v1.5', 'openrouter/meta-llama/llama-4-scout:free', 'llama-3.1-70b', 'eva-qwen-2.5-72b', 'claude-3.5-sonnet-20241022', 'gemini-2.0-flash', 'deepseek-chat', 'o1-mini', 'eva-llama-3.33-70b', 'gemini-2.5-pro-preview-03-25', 'gemini-1.5-pro', 'gpt-4-turbo', 'openrouter/meta-llama/llama-4-maverick:free', 'o3-mini', 'claude-3.7-sonnet', 'voyage-3', 'cerebras/llama-3.1-8b', 'claude-3-opus']
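Note that several entries in that list are embedding models, which presumably won't respond to chat completions at all. A quick sketch to separate them (the name-substring heuristic is my own assumption, not anything Julep documents):

```typescript
// Heuristic (assumption): embedding models are identifiable by name substrings.
const sampleModels = [
  "gpt-4o", "claude-3.5-sonnet", "text-embedding-3-large",
  "voyage-multilingual-2", "BAAI/bge-m3", "vertex_ai/text-embedding-004",
];

const isEmbedding = (id: string): boolean =>
  ["embedding", "voyage", "bge", "gte"].some((s) => id.toLowerCase().includes(s));

const chatModels = sampleModels.filter((id) => !isEmbedding(id));
const embeddingModels = sampleModels.filter(isEmbedding);

console.log(chatModels);      // chat-capable entries
console.log(embeddingModels); // embeddings-only entries
```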

I tried locating the endpoint URLs and wiring them together, but it seems to return an Internal Server Error? Here's the code!
import { serve } from "https://deno.land/std@0.208.0/http/server.ts";

const JULEP_API_BASE_URL = "https://api.julep.ai/api";

// Simple function to generate a v4 UUID (for Agent and Session creation)
function generateUuid(): string {
  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, function (c) {
    const r = Math.random() * 16 | 0, v = c === "x" ? r : (r & 0x3 | 0x8);
    return v.toString(16);
  });
}

// List of supported models (from the list above)
const supportedModels = [
  'mistral-large-2411', 'o1', 'text-embedding-3-large', 'vertex_ai/text-embedding-004',
  'claude-3.5-haiku', 'cerebras/llama-4-scout-17b-16e-instruct', 'llama-3.1-8b',
  'magnum-v4-72b', 'voyage-multilingual-2', 'claude-3-haiku', 'gpt-4o', 'BAAI/bge-m3',
  'openrouter/meta-llama/llama-4-maverick', 'openrouter/meta-llama/llama-4-scout',
  'claude-3.5-sonnet', 'hermes-3-llama-3.1-70b', 'claude-3.5-sonnet-20240620',
  'qwen-2.5-72b-instruct', 'l3.3-euryale-70b', 'gpt-4o-mini', 'cerebras/llama-3.3-70b',
  'o1-preview', 'gemini-1.5-pro-latest', 'l3.1-euryale-70b', 'claude-3-sonnet',
  'Alibaba-NLP/gte-large-en-v1.5', 'openrouter/meta-llama/llama-4-scout:free',
  'llama-3.1-70b', 'eva-qwen-2.5-72b', 'claude-3.5-sonnet-20241022', 'gemini-2.0-flash',
  'deepseek-chat', 'o1-mini', 'eva-llama-3.33-70b', 'gemini-2.5-pro-preview-03-25',
  'gemini-1.5-pro', 'gpt-4-turbo', 'openrouter/meta-llama/llama-4-maverick:free',
  'o3-mini', 'claude-3.7-sonnet', 'voyage-3', 'cerebras/llama-3.1-8b', 'claude-3-opus'
];

async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);

  // Handle /v1/models endpoint
  if (url.pathname === "/v1/models") {
    if (req.method !== "GET") {
      return new Response("Method Not Allowed", { status: 405 });
    }
    return listModelsHandler();
  }

  // Handle /v1/chat/completions endpoint
  if (url.pathname === "/v1/chat/completions") {
    if (req.method !== "POST") {
      return new Response("Method Not Allowed", { status: 405 });
    }
    return chatCompletionsHandler(req);
  }

  // Return 404 for other paths
  return new Response("Not Found", { status: 404 });
}

// Handler for the /v1/models endpoint
function listModelsHandler(): Response {
  const models = supportedModels.map((modelId) => ({
    id: modelId,
    object: "model",
    created: Math.floor(Date.now() / 1000), // Use current time
    owned_by: "julep-proxy", // Indicate this proxy owns the listing
    // Add other relevant fields if needed, like permission
  }));

  const responseBody = {
    object: "list",
    data: models,
  };

  return new Response(JSON.stringify(responseBody), {
    status: 200,
    headers: {
      "Content-Type": "application/json",
    },
  });
}

// Handler for the /v1/chat/completions endpoint
async function chatCompletionsHandler(req: Request): Promise<Response> {
  const headers = new Headers(req.headers);
  headers.delete("content-length");
  headers.set("Content-Type", "application/json"); // Ensure upstream requests are sent as JSON

  if (!headers.has("Authorization")) {
    return new Response("Authorization header is required.", { status: 401 });
  }

  try {
    const openaiPayload = await req.json();

    // 1. Create a new Agent for this session
    const agentId = generateUuid();
    const createAgentPayload = {
      name: `temp-agent-${agentId}`, // Use a unique name
      about: "Temporary agent created for a chat session.",
      // Add other default agent properties if desired
    };
    const createAgentResponse = await fetch(`${JULEP_API_BASE_URL}/agents/${agentId}`, {
      method: "POST",
      headers: headers,
      body: JSON.stringify(createAgentPayload),
    });

    if (!createAgentResponse.ok) {
      console.error("Failed to create agent:", await createAgentResponse.text());
      return new Response("Failed to initialize chat session (agent creation failed).", { status: createAgentResponse.status });
    }
    // const agent = await createAgentResponse.json(); // Agent details if needed

    // 2. Create a new Session using the created Agent
    const sessionId = generateUuid(); // Use UUID for Julep Session ID path parameter
    const createSessionPayload = {
      agent: agentId, // Link the session to the newly created agent
      // Add other default session properties if desired
      // user: "user-id-if-known", // Optionally link to a user
    };

    // Note: Julep's session creation doesn't take a model parameter directly;
    // the model is typically associated with the Agent. The chat request below
    // forwards the client's model parameter instead.

    const createSessionResponse = await fetch(`${JULEP_API_BASE_URL}/sessions/${sessionId}`, {
      method: "POST",
      headers: headers,
      body: JSON.stringify(createSessionPayload),
    });

    if (!createSessionResponse.ok) {
      console.error("Failed to create session:", await createSessionResponse.text());
      // Cleaning up the created agent would add complexity; for now, just fail.
      return new Response("Failed to initialize chat session (session creation failed).", { status: createSessionResponse.status });
    }
    // const session = await createSessionResponse.json(); // Session details if needed

    // 3. Initiate the chat using the new session ID (a UUID)
    const julepPayload = convertOpenaiToJulep(openaiPayload);

    const julepUrl = `${JULEP_API_BASE_URL}/sessions/${sessionId}/chat`;

    const julepResponse = await fetch(julepUrl, {
      method: "POST",
      headers: headers,
      body: JSON.stringify(julepPayload),
    });

    // Surface upstream errors instead of masking them as a generic 500
    if (!julepResponse.ok) {
      const errText = await julepResponse.text();
      console.error("Julep chat request failed:", errText);
      return new Response(errText, { status: julepResponse.status });
    }

    // Handle streaming response
    if (openaiPayload.stream && julepResponse.headers.get("content-type")?.includes("text/event-stream")) {
      const readableStream = new ReadableStream({
        async start(controller) {
          const reader = julepResponse.body?.getReader();
          if (!reader) {
            controller.error("Failed to get reader from Julep response.");
            return;
          }

          const decoder = new TextDecoder();
          const encoder = new TextEncoder(); // Response bodies must be byte streams, not strings

          let buffer = "";

          try {
            while (true) {
              const { done, value } = await reader.read();
              if (done) break;

              buffer += decoder.decode(value, { stream: true });

              // Process complete lines (events); keep the last,
              // potentially incomplete line in the buffer
              const lines = buffer.split("\n");
              buffer = lines.pop() || "";

              for (const line of lines) {
                if (line.startsWith("data:")) {
                  const data = line.substring(5).trim(); // Extract data after "data:"
                  if (data === "[DONE]") {
                    controller.enqueue(encoder.encode(`data: [DONE]\n\n`));
                  } else {
                    try {
                      const julepChunk = JSON.parse(data);
                      // Use the Julep session ID (UUID) as the OpenAI response ID
                      const openaiChunk = convertJulepChunkToOpenai(julepChunk, openaiPayload.model || "julep-model", sessionId);
                      controller.enqueue(encoder.encode(`data: ${JSON.stringify(openaiChunk)}\n\n`));
                    } catch (parseError) {
                      console.error("Error parsing Julep stream chunk:", parseError);
                      // Optionally enqueue an error chunk or log it
                    }
                  }
                } else if (line.startsWith(":")) {
                  // Ignore SSE comments
                } else if (line.trim() !== "") {
                  // Blank lines are event separators; anything else is unexpected
                  console.warn("Received unexpected stream line:", line);
                }
              }
            }
            controller.close(); // Close only after a clean end of stream
          } catch (error) {
            console.error("Error reading Julep stream:", error);
            controller.error(error); // Closing an already-errored controller would throw
          } finally {
            reader.releaseLock();
          }
        },
      });

      return new Response(readableStream, {
        status: julepResponse.status,
        headers: {
          "Content-Type": "text/event-stream",
          "Cache-Control": "no-cache",
          "Connection": "keep-alive",
        },
      });

    } else {
      // Handle non-streaming response
      const julepData = await julepResponse.json();
      // Use the Julep session ID (UUID) as the OpenAI response ID
      const openaiResponse = convertJulepToOpenai(julepData, openaiPayload.model || "julep-model", sessionId);

      return new Response(JSON.stringify(openaiResponse), {
        status: julepResponse.status,
        headers: {
          "Content-Type": "application/json",
        },
      });
    }

  } catch (error) {
    console.error("Error processing request:", error);
    return new Response("Internal Server Error", { status: 500 });
  }
}


function convertOpenaiToJulep(openaiPayload: any): any {
  const julepPayload: any = {
    messages: openaiPayload.messages.map((msg: any) => ({
      role: msg.role,
      content: msg.content,
      name: msg.name,
      tool_call_id: msg.tool_call_id,
      tool_calls: msg.tool_calls ? msg.tool_calls.map((tc: any) => ({
        type: tc.type,
        function: tc.function ? {
          name: tc.function.name,
          arguments: tc.function.arguments,
        } : undefined,
        integration: tc.integration,
        system: tc.system,
        api_call: tc.api_call,
        computer_20241022: tc.computer_20241022,
        text_editor_20241022: tc.text_editor_20241022,
        bash_20241022: tc.bash_20241022,
        id: tc.id,
      })) : undefined,
    })),
    tools: openaiPayload.tools,
    tool_choice: openaiPayload.tool_choice,
    recall: openaiPayload.recall,
    save: openaiPayload.save,
    model: openaiPayload.model,
    stream: openaiPayload.stream,
    stop: openaiPayload.stop,
    seed: openaiPayload.seed,
    max_tokens: openaiPayload.max_tokens,
    logit_bias: openaiPayload.logit_bias,
    response_format: openaiPayload.response_format,
    agent: openaiPayload.agent,
    repetition_penalty: openaiPayload.repetition_penalty,
    length_penalty: openaiPayload.length_penalty,
    min_p: openaiPayload.min_p,
    frequency_penalty: openaiPayload.frequency_penalty,
    presence_penalty: openaiPayload.presence_penalty,
    temperature: openaiPayload.temperature,
    top_p: openaiPayload.top_p,
  };

  return julepPayload;
}

// Converts a non-streaming Julep response to OpenAI format,
// using the Julep session ID (UUID) as the OpenAI response ID
function convertJulepToOpenai(julepData: any, model: string, sessionId: string): any {
  const openaiResponse: any = {
    id: sessionId,
    object: "chat.completion",
    // Fall back to the current time if Julep omits created_at
    created: julepData.created_at
      ? Math.floor(new Date(julepData.created_at).getTime() / 1000)
      : Math.floor(Date.now() / 1000),
    model: model,
    choices: julepData.choices.map((choice: any) => ({
      index: choice.index,
      // Extract the message content from choice.message
      message: {
        role: choice.message?.role || "assistant",
        content: choice.message?.content || "",
        tool_calls: choice.message?.tool_calls ? choice.message.tool_calls.map((tc: any) => ({
          id: tc.id,
          type: tc.type,
          function: tc.function ? {
            name: tc.function.name,
            arguments: tc.function.arguments,
          } : undefined,
        })) : undefined,
      },
      finish_reason: choice.finish_reason,
      // logprobs are not typically present in non-streaming choices
    })),
    usage: julepData.usage ? {
      prompt_tokens: julepData.usage.prompt_tokens,
      completion_tokens: julepData.usage.completion_tokens,
      total_tokens: julepData.usage.total_tokens,
    } : undefined,
  };

  return openaiResponse;
}

// Converts a single Julep streaming chunk to OpenAI streaming format
function convertJulepChunkToOpenai(julepChunk: any, model: string, sessionId: string): any {
  const openaiChunk: any = {
    id: sessionId,
    object: "chat.completion.chunk",
    created: Math.floor(Date.now() / 1000),
    model: model,
    choices: julepChunk.choices.map((choice: any) => {
      const openaiChoice: any = {
        index: choice.index,
        // Delta structure for streaming chunks
        delta: {
          role: choice.delta?.role,
          content: choice.delta?.content,
          tool_calls: choice.delta?.tool_calls ? choice.delta.tool_calls.map((tc: any) => ({
            id: tc.id,
            type: tc.type,
            function: tc.function ? {
              name: tc.function.name,
              arguments: tc.function.arguments,
            } : undefined,
          })) : undefined,
        },
        finish_reason: choice.finish_reason,
        // logprobs are not typically present in streaming chunks
      };

      // Clean up empty delta fields
      if (openaiChoice.delta.role === undefined) delete openaiChoice.delta.role;
      if (openaiChoice.delta.content === undefined) delete openaiChoice.delta.content;
      if (openaiChoice.delta.tool_calls === undefined) delete openaiChoice.delta.tool_calls;
      if (Object.keys(openaiChoice.delta).length === 0 && openaiChoice.finish_reason === undefined) {
        delete openaiChoice.delta;
      }

      return openaiChoice;
    }),
  };

  return openaiChunk;
}


// Start the server (default port 8000)
serve(handler);
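The stream handler's buffering trick (split on newline, keep the trailing partial line for the next read) is the part most likely to go wrong, so here it is pulled out as a pure function you can sanity-check without a server. The function name is mine, not part of the proxy:

```typescript
// Pure extraction of the SSE line-buffering logic used in the stream handler.
// Feed it the previous leftover plus a new raw chunk; it returns the complete
// lines and the leftover partial line to carry into the next read.
function sseLines(buffer: string, chunk: string): { lines: string[]; rest: string } {
  const parts = (buffer + chunk).split("\n");
  const rest = parts.pop() || ""; // the last element may be an incomplete line
  return { lines: parts, rest };
}

// Simulate one data event arriving split across two network chunks
let state = { lines: [] as string[], rest: "" };
state = sseLines(state.rest, 'data: {"choi');
console.log(state.lines); // [] (nothing complete yet)
state = sseLines(state.rest, 'ces":[]}\n\n');
console.log(state.lines); // the complete event line plus the blank separator
```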

You use your own API key; get one at https://dashboard.julep.ai/ via the button in the bottom-right corner.
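Once the proxy is running (e.g. `deno run --allow-net main.ts`), any OpenAI-style client should be able to talk to it. A minimal sketch of the request a client would send; the localhost URL, port, and placeholder key are assumptions:

```typescript
// Hypothetical client-side request against the proxy (URL/port are assumptions).
const payload = {
  model: "gpt-4o-mini",
  stream: false,
  messages: [{ role: "user", content: "Hello!" }],
};

const request = new Request("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_JULEP_API_KEY", // key from https://dashboard.julep.ai/
    "Content-Type": "application/json",
  },
  body: JSON.stringify(payload),
});

console.log(request.method, new URL(request.url).pathname);
// To actually send it: const res = await fetch(request);
```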