
May 2024 - Beta release of TypeScript v2 and PHP, new reduced starter plan pricing

In this month's release notes, we are announcing the beta release of PHP and v2 of our TypeScript SDK generation, a new reduced pricing model for our starter plan, the ability to integrate SDK publishing with your CI/CD, merge capabilities for your OpenAPI spec, and more!

Two cute plushie llamas holding a sign with the TypeScript and PHP logos on it. The background has cherry blossoms.

Released this month

PHP beta

We're excited to add a new language to our SDK generation capabilities, PHP! Most of the features in our other languages are available in PHP, and we're close to finalizing the SDK surface area. We're looking for feedback on the PHP SDK, so please give it a try and let us know what you think.

To enable PHP SDK generation, add "php" to the languages option in your liblab config file, and set the packageName option in the languageOptions/php section:

liblab.config.json
{
  "specFilePath": "petstore.json",
  "languages": [
    "php"
  ],
  "languageOptions": {
    "php": {
      "packageName": "company/sdk"
    }
  }
}

The generated SDKs support PHP 8.0 and up.

note

This is a beta release, so there may be breaking changes in the future in PHP hooks or in the generated SDKs.

TypeScript v2 beta

We've updated our TypeScript SDK generation to v2.

This update includes a number of improvements, including:

Model validation using Zod

Zod is a TypeScript-first schema declaration and validation library. We've integrated Zod into our TypeScript SDK generation to provide model validation.

This validation enforces the rules from your OpenAPI spec on the SDK side, rather than sending invalid requests to your API. For example, if you have a schema for a llama with a rating from 1 to 5:

llamastore.json
{
  "components": {
    "schemas": {
      "Llama": {
        "type": "object",
        "properties": {
          "rating": {
            "type": "integer",
            "minimum": 1,
            "maximum": 5
          }
        }
      }
    }
  }
}

The generated TypeScript SDK will validate this when you create the model:

src/services/llama/models/llama.ts
import { z } from 'zod';

export const llama = z.object({
  rating: z.number().gte(1).lte(5),
});
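
To see what this buys you at runtime, here's a minimal sketch using Zod's standard safeParse on the schema above; the generated SDK applies the same rules when the model is used, though its exact error handling may differ:

import { z } from 'zod';

const llama = z.object({
  rating: z.number().gte(1).lte(5),
});

// A rating of 6 violates the OpenAPI maximum of 5, so validation fails
const result = llama.safeParse({ rating: 6 });

if (!result.success) {
  // result.error.issues lists which rule was violated and on which field
  console.log(result.error.issues);
}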

Custom retries

As well as supporting retries through configuration, SDKs generated with TypeScript v2 now let your SDK users provide a custom retry configuration for every API call.

For example, if you have a method to get all llamas from a llama service, the generated method signature will look like this:

src/services/llamas.ts
export class LlamaService extends BaseService {
  async getLlamas(requestConfig?: RequestConfig): Promise<HttpResponse<Llama[]>> {
    ...
  }
}

The RequestConfig object can be used to provide custom retry configurations for this specific API call.

src/http/types.ts
export interface RequestConfig {
  retry?: RetryOptions;
}

export interface RetryOptions {
  attempts: number;
  delayMs?: number;
}
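
As a rough usage sketch (the llamastore package, client class, and llamas accessor are illustrative names, not the exact output of your SDK; those depend on your spec), an SDK user could override retries for a single call like this:

// Hypothetical package and client names, for illustration only
import { LlamaStore } from 'llamastore';

const client = new LlamaStore();

// Retry this specific call up to 3 times, waiting 500ms between attempts
const response = await client.llamas.getLlamas({
  retry: { attempts: 3, delayMs: 500 },
});

console.log(response);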

Documentation on this is coming soon, so please reach out if you need help with this feature.

Improved name generation

We've improved name generation to provide better names for models, services, and methods in the generated SDKs.

Enable TypeScript v2 SDK generation

To enable TypeScript v2 SDK generation, you need to add "liblabVersion": "2" to the languageOptions/typescript option in your liblab config file:

liblab.config.json
{
  "specFilePath": "petstore.json",
  "languages": [
    "typescript"
  ],
  "languageOptions": {
    "typescript": {
      "liblabVersion": "2"
    }
  }
}

note

TypeScript v2 is a major version update, so there are breaking changes between SDKs generated with v1 and v2. We will be supporting both versions for the foreseeable future, so you can choose when to upgrade.

If you need help with the upgrade, please reach out either using our contact form or via our Discord server.

This is a beta release, so there may be breaking changes in the future in TypeScript v2 hooks or in the generated SDKs.

Reduced starter plan pricing

We want to make it easier for you to get started with liblab, so we've reduced the pricing of our starter plan. Our new pricing is $100 a month paid annually, or $120 a month paid monthly, for unlimited SDK and documentation builds across unlimited endpoints. And as always, you get your first 15 SDK generations for free.

Check out our pricing page for more details.

Other improvements and bug fixes

Template repos for your control and SDK repos

When you set up liblab in your CI/CD pipeline, the typical way is to have a control repo containing your config file, hooks code, and API spec, and SDK repos containing the generated SDKs. You would then use your CI/CD pipeline to generate the SDKs when your spec changes, and raise pull requests. After reviewing and merging, you would create a release in your SDK repo, which would then be published to your package manager.

To help you get started, we've created GitHub template repos for both control and SDK repos. You can find them here:

  • Control repo: liblaber/control-repo-template
  • C# SDK repo: liblaber/csharp-sdk-template
  • Go SDK repo: liblaber/go-sdk-template
  • Python SDK repo: liblaber/python-sdk-template
  • TypeScript SDK repo: liblaber/typescript-sdk-template

More template repos will be added in the future.

Updated tutorial - Publish your SDKs via GitHub Actions

We've updated our End-to-end SDK generation and publishing with GitHub Actions tutorial to provide a lot more detail, including some nice flow diagrams and links to the template repos. It covers everything from setting up your control repo all the way to publishing your SDKs to a package manager. With this tutorial you can set up an automated end-to-end pipeline from your API spec to a published SDK in only a few minutes.

We've also added a video walkthrough of the tutorial on the liblab YouTube channel.

New Tutorial: Build a RAG AI app using SDKs

We've added a new tutorial for AI app builders. One of the hottest topics in AI at the moment is RAG - retrieval augmented generation. This is a technique that allows you to combine an LLM like ChatGPT with data from your own sources. For example, you can ask an LLM to provide an overview of product reviews for your products, and use RAG to send the relevant product data to the LLM to generate the overview.

One way to implement RAG is to connect an API to your AI app, and the easiest way to do this is to use SDKs. This tutorial shows how to do this, generating an SDK for an API, then using it in an AI app to provide data to an LLM.

This tutorial uses Microsoft Semantic Kernel as an AI app framework, and ChatGPT from OpenAI as the LLM. The app you build is in C#, using a C# SDK generated from the Cat Facts API spec.

liblab merge and bundle for your OpenAPI spec

One thing we have learned from our customers is that they want to combine multiple API specs into one SDK. Sometimes this is because they have multiple APIs that use a shared set of schemas, other times it's because they have multiple smaller APIs that they want to combine into one larger SDK, or because one spec references schemas in another spec.

To help with this, we've added 2 new commands to the liblab CLI - merge and bundle.

  • merge allows you to merge multiple API specs into one. You give the command a folder of specs, defining the 'main' spec that contains the core API details (such as the info section), and a merged spec will be generated.
  • bundle allows you to bundle an API spec that has remote references to schemas. This will create a new spec with all the remote references inlined (see the sketch after this list).
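
To illustrate what bundle does (the URL and path below are made up for this example), a spec might reference a schema hosted in another file like this:

{
  "paths": {
    "/llamas": {
      "get": {
        "responses": {
          "200": {
            "description": "A list of llamas",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "https://example.com/schemas/llama.json"
                }
              }
            }
          }
        }
      }
    }
  }
}

Running bundle resolves that remote reference and embeds its contents in the output spec, leaving you with a single self-contained file.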

Once you have your merged or bundled spec, you can generate an SDK as normal.

Custom parameters for your liblab hooks

Hooks are a very powerful way to build customizations into your SDKs. As we've been working with customers on their hooks, we've found there is a need to be able to provide custom parameters from the SDK user to the hooks code, for example when implementing custom authentication logic.

To help with this, we've added the ability to define additional constructor parameters in your config file. These parameters are exposed on the SDK client and passed through to your hooks. At the moment this is only available for v2 Python SDK generation, but we will be adding it to other languages in the future.
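
As a rough sketch of the idea rather than the exact config schema (the additionalConstructorParameters key and its values below are hypothetical placeholders for illustration; check the hooks documentation for the real option name and shape), you would declare the extra parameter alongside your other Python options, and the value the SDK user passes to the client constructor would then be available in your hooks code:

{
  "specFilePath": "petstore.json",
  "languages": [
    "python"
  ],
  "languageOptions": {
    "python": {
      "liblabVersion": "2",
      "additionalConstructorParameters": {
        "tenant_id": "default-tenant"
      }
    }
  }
}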