Writing Tests for Blocks
This guide explains how to write tests for Platforma blocks. It assumes you understand block structure (model, workflow, UI). For block development concepts, see the Block Development Walkthrough.
Introduction
Block tests connect to a real Platforma backend and simulate user actions: adding blocks, setting inputs, running blocks, and checking outputs. Unlike unit tests that test code in isolation, block tests verify the entire block behavior in a realistic environment.
Setting Up Your Test Environment
Prerequisites
- Platforma backend instance running (see Run Local Backend)
- Test package with @platforma-sdk/test installed as a dependency (should already exist in your block)
Test Structure
Each block skeleton already includes a separate package for the block's tests: the test directory contains everything you need to start testing your block.
Tests are typically written in test/src/wf.test.ts.
Test Framework Overview
The @platforma-sdk/test package provides the testing infrastructure.
To declare a test, use the special blockTest function.
Each test you declare is isolated: it receives a fresh empty project that is cleaned up automatically after the test finishes successfully.
blockTest extends the standard vitest test function with Platforma-specific fixtures.
Test Fixtures
A test fixture is a special 'magic' argument that is provided to each test out of the box and is available during test execution.
Here is how you can define a test and receive standard test fixtures:
import { blockTest } from '@platforma-sdk/test';
blockTest('empty inputs', async ({ rawPrj: project, ml, helpers, expect }) => {
// rawPrj (or project): special project created for test. You can add block into it, run blocks and so on.
// ml: MiddleLayer instance for accessing drivers and other primitives.
// helpers: utility functions for common operations
// expect: standard vitest's assertion library
});
You don't have to list every fixture in each test: destructure only the ones you actually use. If a test does not need access to drivers/pframes via the ml fixture, simply omit it from the test's declaration.
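Conceptually, the fixture mechanism works like vitest's test.extend: the runner constructs the fixtures for each test and hands them to your callback, which destructures only what it needs. Here is a simplified, hypothetical sketch of the idea (all names here are made up for illustration; this is not the SDK's implementation):

```typescript
// Hypothetical, simplified sketch of fixture injection (NOT the SDK's
// implementation): the runner builds the fixtures and hands them to the
// test callback, which destructures only what it needs.
type Fixtures = {
  project: { addBlock: (label: string) => string };
  helpers: { log: (msg: string) => void };
};

function runWithFixtures(testFn: (fx: Fixtures) => void): void {
  // Fresh fixtures are constructed for every test, which is what
  // gives each blockTest its isolated, empty project.
  const fixtures: Fixtures = {
    project: { addBlock: (label) => `${label}-id-1` },
    helpers: { log: () => {} },
  };
  testFn(fixtures);
}

// A test callback that uses only the 'project' fixture:
runWithFixtures(({ project }) => {
  const blockId = project.addBlock('Block');
  // blockId now holds the id the runner assigned
});
```

Because fixtures are rebuilt per test, no state leaks between tests even when they run in the same process.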
Your First Test: Initial Block State
import { blockTest } from '@platforma-sdk/test';
import { blockSpec } from 'this-block';
blockTest('empty inputs', async ({ rawPrj: project, expect }) => {
// Add the block to the project provided by the test fixture
const blockId = await project.addBlock('Block', blockSpec);
// Get helper for accessing block state.
const blockState = project.getBlockState(blockId);
// Wait for a stable block state to work with (OK or error).
const stableState = await blockState.awaitStableValue();
// Block outputs structure and fields are controlled by block's model.
const outputs = stableState.outputs;
// Standard vitest expectation checks are available in 'expect' fixture
expect(outputs).toMatchObject({ inputOptions: { ok: true, value: [] } });
});
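awaitStableValue() keeps watching the block state until it settles on a terminal value. As a rough, hypothetical illustration of the pattern, here is a generic "wait until the value stops changing" poller (the real SDK is driven by change notifications, not polling, and awaitStable below is not an SDK function):

```typescript
// Hypothetical polling sketch of the "await stable value" idea:
// repeatedly read a value until two consecutive reads agree.
async function awaitStable<T>(
  read: () => Promise<T>,
  intervalMs = 50,
  timeoutMs = 5_000,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  let prev = await read();
  while (Date.now() < deadline) {
    await new Promise((r) => setTimeout(r, intervalMs));
    const next = await read();
    // Compare snapshots structurally; equal snapshots mean the value settled.
    if (JSON.stringify(next) === JSON.stringify(prev)) return next;
    prev = next;
  }
  throw new Error('value did not stabilize within timeout');
}
```

The key property mirrored from the SDK: the promise resolves only once the state is stable (success or error), so assertions run against a final value rather than a transient one.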
Run the block to get its outputs
Here's a simple test that adds a block, runs it, and checks the result:
import { blockTest } from '@platforma-sdk/test';
import { blockSpec } from 'this-block';
blockTest('empty inputs', async ({ rawPrj: project, helpers, expect }) => {
// Add block to project
const blockId = await project.addBlock('Block', blockSpec);
// 'Click' 'Run' button
await project.runBlock(blockId);
// Wait for block to finish its work (calculationStatus === 'Done')
await helpers.awaitBlockDone(blockId);
// Get helper for accessing block state.
const blockState = project.getBlockState(blockId);
// Get the current ('momentary') block state. Throws an exception if the block contains an error.
// Since we called awaitBlockDone() earlier, this is the final state of the block.
// Useful for inspecting the block state when it does not fail with an error
// but behaves strangely.
console.dir(await blockState.getValue(), { depth: 5 });
// Wait for a stable block state to work with (OK or error).
// Since we called awaitBlockDone() earlier, this returns the state immediately.
const stableState = await blockState.awaitStableValue();
// Standard vitest expectation checks are available in 'expect' fixture
expect(stableState.outputs).toStrictEqual({ imports: { ok: true, value: {} } });
});
Running the test
cd test
PL_ADDRESS='http://localhost:6345' pnpm test
Setting test timeouts
Test timeouts are specified in milliseconds. If you think in seconds, remember to multiply by 1000.
Timeouts can be set in three places:
- Globally, for each test in a run:
# If two tests run one after another and both time out, the run will take
# 20 seconds (10 seconds each)
pnpm test -- --test-timeout 10000
# Mind the '--' between 'pnpm test' and '--test-timeout'.
# It tells pnpm that the options that follow belong to the script being run, not to pnpm itself.
- For a particular block test:
// A custom timeout for a specific block test can be set in the blockTest options parameter.
blockTest('with custom timeout', { timeout: 10_000 }, async ({ rawPrj: project }) => { /* ... */ });
- For a particular step in a test:
blockTest('with custom timeout', { timeout: 10_000 }, async ({ rawPrj: project, helpers }) => {
// ...
// A custom timeout can be passed to a specific block state waiter.
await helpers.awaitBlockDone(blockId, 5_000);
});
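To avoid the "forgot the 000" mistake entirely, a tiny conversion helper can make the units explicit (seconds below is a hypothetical convenience, not an SDK function):

```typescript
// Hypothetical convenience helper: express timeouts in seconds and
// convert once to the milliseconds the test framework expects.
const seconds = (n: number): number => n * 1_000;

const globalTimeout = seconds(10); // 10_000 ms, e.g. for --test-timeout
const stepTimeout = seconds(5);    //  5_000 ms, e.g. for awaitBlockDone
```

Pass the resulting values anywhere blockTest or the helpers expect milliseconds.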
Setting Block Arguments
Use project.setBlockArgs() to configure block inputs.
import { blockTest } from '@platforma-sdk/test';
import { uniquePlId } from '@platforma-sdk/model';
import { blockSpec } from 'this-block';
import type { BlockArgs } from '@platforma-open/milaboratories.samples-and-data.model'; // Use your block's model package to import the right arguments schema.
blockTest('simple input', async ({ rawPrj: project, expect }) => {
const blockId = await project.addBlock('Block', blockSpec);
const somethingId = uniquePlId(); // generate unique ID for something
// The arguments accepted are controlled by your block's model.
await project.setBlockArgs(blockId, {
ids: [somethingId],
labels: { [somethingId]: 'Sample 1' },
// ... other args
} satisfies BlockArgs);
// Continue with test...
});
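The satisfies operator is worth the extra keystrokes: TypeScript checks the object literal against your args type at compile time without widening its inferred type, so a misspelled field is caught before the test ever talks to the backend. A standalone illustration with a made-up args shape (ExampleArgs stands in for your block's BlockArgs):

```typescript
// Standalone illustration of `satisfies` (ExampleArgs is a made-up type,
// standing in for your block's BlockArgs).
type ExampleArgs = {
  ids: string[];
  labels: Record<string, string>;
};

const args = {
  ids: ['s1'],
  labels: { s1: 'Sample 1' },
  // lables: {},  // <- a typo like this would fail to compile
} satisfies ExampleArgs;
```

Unlike a type annotation (const args: ExampleArgs = ...), satisfies keeps the literal's precise inferred type, which is handy when you reuse the object later in the test.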
Working with Files and Data
To test blocks that process files, use helpers.getLocalFileHandle():
const fileHandle = await helpers.getLocalFileHandle('./assets/my.fastq.gz');
File paths are relative to the test directory. The function returns a handle that can be used in block arguments:
import { blockTest } from '@platforma-sdk/test';
import { uniquePlId } from '@platforma-sdk/model';
import { blockSpec } from 'this-block';
import type { BlockArgs } from '@platforma-open/milaboratories.samples-and-data.model'; // Use your block's model package instead.
blockTest('use files', async ({ rawPrj: project, helpers }) => {
const blockId = await project.addBlock('Block', blockSpec);
// Generate unique sample ID
const sampleId = uniquePlId();
// Load R1 and R2 of the sample
const r1Handle = await helpers.getLocalFileHandle('./assets/small_data_R1.fastq.gz');
const r2Handle = await helpers.getLocalFileHandle('./assets/small_data_R2.fastq.gz');
await project.setBlockArgs(blockId, {
sampleIds: [sampleId],
dataset: {
[sampleId]: {
r1: r1Handle,
r2: r2Handle,
}
},
// ... other args
} satisfies BlockArgs);
// Continue with test...
await project.runBlock(blockId);
});
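If an asset file is missing, the failure may surface only deep inside the run. A small pre-flight check with the Node standard library can fail fast with a clearer message (assertAssetExists below is a hypothetical helper, not part of the SDK):

```typescript
import { existsSync } from 'node:fs';
import { resolve } from 'node:path';

// Hypothetical helper: verify a test asset exists before handing the
// path to helpers.getLocalFileHandle(), so a missing file fails fast
// with an explicit message instead of a late, cryptic error.
function assertAssetExists(relativePath: string): string {
  const fullPath = resolve(relativePath);
  if (!existsSync(fullPath)) {
    throw new Error(`test asset not found: ${fullPath}`);
  }
  return fullPath;
}
```

Call it on each asset path at the top of the test before building block arguments.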
Working with Block Outputs
Block outputs are available in stableState.outputs.
You can use wrapOutputs() to get type-safe access to output values with automatic error checking:
import { blockTest } from '@platforma-sdk/test';
import { blockSpec } from 'this-block';
import { wrapOutputs } from '@platforma-sdk/model';
blockTest('wrap outputs', async ({ rawPrj: project, expect }) => {
const blockId = await project.addBlock('Block', blockSpec);
// test configuration steps: set block args and so on
const stableState = await project.getBlockState(blockId).awaitStableValue();
const outputs = wrapOutputs(stableState.outputs);
// This is type-safe and has nice suggestions from your IDE!
expect(outputs.presets!.size).toBe(0);
});
Multi-block tests
Many blocks depend on data from upstream blocks. You can add several blocks to your test project and configure each as needed:
import { blockTest } from '@platforma-sdk/test';
import { uniquePlId, wrapOutputs } from '@platforma-sdk/model';
import { blockSpec as samplesAndDataBlockSpec } from '@platforma-open/milaboratories.samples-and-data';
import type { BlockArgs as SamplesAndDataBlockArgs } from '@platforma-open/milaboratories.samples-and-data.model';
import { blockSpec as currentBlockSpec } from 'this-block';
import type { BlockArgs } from 'this-block.model'; // Import the args type from your own block's model package.
blockTest('multi-block', async ({ rawPrj: project, helpers, expect }) => {
const sndBlockId = await project.addBlock('Block', samplesAndDataBlockSpec);
const myBlockId = await project.addBlock('Block', currentBlockSpec);
const sample1Id = uniquePlId();
const metaColumn1Id = uniquePlId();
const dataset1Id = uniquePlId();
const r1Handle = await helpers.getLocalFileHandle('./assets/small_data_R1.fastq.gz');
const r2Handle = await helpers.getLocalFileHandle('./assets/small_data_R2.fastq.gz');
// You can inspect the block args you need using the Desktop App's block debug panel
await project.setBlockArgs(sndBlockId, {
metadata: [
{
id: metaColumn1Id,
label: 'MetaColumn1',
global: false,
valueType: 'Long',
data: {
[sample1Id]: 2345
}
}
],
sampleIds: [sample1Id],
sampleLabelColumnLabel: 'Sample Name',
sampleLabels: { [sample1Id]: 'Sample 1' },
datasets: [
{
id: dataset1Id,
label: 'Dataset 1',
content: {
type: 'Fastq',
readIndices: ['R1', 'R2'],
gzipped: true,
data: {
[sample1Id]: {
R1: r1Handle,
R2: r2Handle
}
}
}
}
]
} satisfies SamplesAndDataBlockArgs);
await project.runBlock(sndBlockId);
await helpers.awaitBlockDone(sndBlockId, 8000);
const myBlockState = project.getBlockState(myBlockId);
const myOutputs1 = wrapOutputs((await myBlockState.awaitStableValue()).outputs!);
await project.setBlockArgs(myBlockId, {
input: myOutputs1.inputOptions[0].ref,
// ...
} satisfies BlockArgs);
// ...remaining operations with your block
});
For details on how data flows between blocks via the Result Pool, see Using information produced by upstream blocks.