Filesystem deposit workflows
Raydocs workflows can now connect to remote filesystems using a single `filesystem` credential type.
Version 1 supports:
- S3-compatible storage
- FTP
- SFTP
The moving parts
There are two workflow building blocks:
- Filesystem node: list, read, inspect, move, copy, or delete remote files
- Filesystem Deposit Trigger: poll one or more inbox folders and create one workflow run per claimed file
Supported credentials
The shared `filesystem` credential type supports three drivers:
- S3-compatible
- FTP
- SFTP
- S3-compatible uses fields such as access key, secret, region, bucket, and optional endpoint
- FTP uses host, port, username, password, SSL, passive mode, and timeout
- SFTP uses host, port, username, password or private key, optional passphrase, and timeout
- Raydocs only pre-fills non-secret fields
- existing secrets are shown as stored, but never returned in clear text
- leaving a secret field empty keeps the current stored value
- clearing a secret is an explicit action in the credential editor
The same credential can be used by both building blocks:
- the Filesystem Deposit Trigger
- the Filesystem action node
How deposit claiming works
The trigger does not use a separate `processing/` folder.
Instead, when a file is picked up, Raydocs renames it in place and appends the workflow run id to the filename.
Example:
- original: `inbox/invoice-123.pdf`
- claimed: `inbox/invoice-123.__raydocs__run_<runId>.pdf`

This has two benefits:
- the original filename no longer appears as a fresh candidate on the next scan
- already-claimed files are easy for the scanner to ignore
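The renaming scheme above can be sketched as a small helper. This is illustrative only; the marker format is taken from the example above, and the function names are not Raydocs internals:

```python
from pathlib import PurePosixPath


def claimed_name(path: str, run_id: str) -> str:
    """Build the claimed filename by inserting the run id before the extension."""
    p = PurePosixPath(path)
    # "inbox/invoice-123.pdf" -> "inbox/invoice-123.__raydocs__run_<runId>.pdf"
    return str(p.with_name(f"{p.stem}.__raydocs__run_{run_id}{p.suffix}"))


def is_claimed(path: str) -> bool:
    """Already-claimed files are easy for the scanner to ignore."""
    return ".__raydocs__run_" in PurePosixPath(path).name
```

Because the claim marker lives in the filename itself, no extra state store is needed to distinguish fresh files from claimed ones.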
What happens during a scan
Each scan follows the same sequence:
- Raydocs loads the trigger configuration and acquires a lock for that trigger
- Raydocs scans the configured inbox folders
- It filters files using the trigger settings:
- inbox path list
- allowed extensions
- recursive or non-recursive scan
- hidden file filtering
- max files per scan
- For each candidate file, Raydocs creates a workflow run
- Raydocs renames the file in place to claim it
- Raydocs injects file metadata into the run input
- Raydocs dispatches the workflow run
For each matching file, the net result is:
- one file found
- one workflow run created
- one claimed filename written back to the remote filesystem
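The filtering step in the sequence above can be sketched as a small selection function. This is a minimal sketch; the function name and parameters are assumptions, not Raydocs internals:

```python
from pathlib import PurePosixPath


def select_candidates(paths, allowed_extensions, ignore_hidden, max_files):
    """Apply the trigger filters to a listing of inbox files."""
    picked = []
    for path in paths:
        name = PurePosixPath(path).name
        if ".__raydocs__run_" in name:
            continue  # already claimed on a previous scan
        if ignore_hidden and name.startswith("."):
            continue  # hidden file filtering
        ext = PurePosixPath(name).suffix.lstrip(".").lower()
        if allowed_extensions and ext not in allowed_extensions:
            continue  # allowed extensions
        picked.append(path)
        if len(picked) >= max_files:
            break  # max files per scan
    return picked
```
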
What the workflow receives
When a file is successfully claimed, the workflow run receives structured input describing the deposited file.

What the trigger does
The trigger is intentionally lightweight. It only handles:
- schedule
- scan
- filter
- claim by rename
- create workflow runs

It does not handle:
- archive folders
- error folders
- automatic cleanup after success
- automatic move-to-error behavior
What happens when a file is found
When the trigger finds a matching file:
- it creates a run
- it claims the file by renaming it
- it starts the workflow with the `deposit_file` payload
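The exact shape of the `deposit_file` payload is not reproduced on this page. A plausible sketch, where every field name except `claimed_path` (which is referenced in the patterns below) is an assumption:

```python
# Illustrative only: field names other than "claimed_path" are assumptions,
# and the values are made-up examples following the claim-by-rename scheme.
deposit_file = {
    "original_path": "inbox/invoice-123.pdf",
    "claimed_path": "inbox/invoice-123.__raydocs__run_abc123.pdf",
    "original_filename": "invoice-123.pdf",
    "run_id": "abc123",
}
```
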
What happens when something goes wrong
There are several different error cases, and they do not all behave the same way.

If the scan cannot access the filesystem
Examples:
- bad credentials
- remote server unavailable
- permission denied on the root path

In this case:
- no files are claimed
- no runs are started
- the trigger records an error and retries on the next scheduled scan
If a run is created but the claim rename fails
Examples:
- the file was removed by another process
- the remote server rejects the rename
- permissions allow listing but not moving

In this case:
- Raydocs does not keep the pending run
- the file stays untouched under its original name
- the next scan can try again naturally
If the file is claimed but the workflow cannot be started cleanly
Examples:
- run enrichment fails
- dispatch fails after the rename

In this case, Raydocs attempts to:
- move the file back to its original name
- delete the pending run
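The rollback behavior in the last two cases can be sketched as follows. The `fs` and `runs` helpers are hypothetical stand-ins for internal operations, not a real Raydocs API:

```python
class FakeFS:
    """Hypothetical in-memory filesystem used only to illustrate the flow."""
    def __init__(self, files):
        self.files = set(files)

    def rename(self, src, dst):
        if src not in self.files:
            raise OSError(f"no such file: {src}")
        self.files.remove(src)
        self.files.add(dst)


class FakeRuns:
    """Hypothetical run store: pending runs, dispatch, and deletion."""
    def __init__(self, fail_dispatch=False):
        self.pending, self.dispatched = {}, []
        self.fail_dispatch = fail_dispatch
        self._next = 0

    def create(self, path):
        self._next += 1
        self.pending[self._next] = path
        return self._next

    def delete(self, run_id):
        self.pending.pop(run_id, None)

    def dispatch(self, run_id):
        if self.fail_dispatch:
            raise RuntimeError("dispatch failed")
        self.dispatched.append(run_id)


def claim_and_dispatch(fs, runs, original_path, claimed_path):
    """Claim a file by rename, then dispatch; roll back on failure."""
    run_id = runs.create(original_path)          # pending run
    try:
        fs.rename(original_path, claimed_path)   # claim by rename
    except OSError:
        runs.delete(run_id)                      # rename failed: drop the pending run
        return None                              # file stays under its original name
    try:
        runs.dispatch(run_id)                    # start the workflow
    except RuntimeError:
        fs.rename(claimed_path, original_path)   # move the file back to its original name
        runs.delete(run_id)                      # delete the pending run
        return None
    return run_id
```

The point of the sketch is the ordering: the pending run and the claim rename are each undone if the step after them fails, so a failure never strands a claimed file without a running workflow.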
If the workflow starts and later fails during processing
This is the most important case to design for. In this case:
- the file is already claimed
- the trigger will ignore it on future scans
- Raydocs will not automatically move it to `error/`
Recommended cleanup pattern
Cleanup belongs to the workflow itself. Recommended approach:
- on the success path, use the Filesystem node to delete the claimed file or move it to an archive folder
- on the failure path, use On Workflow Error plus the Filesystem node to move the claimed file to an `error/` folder
Recommended error-handling pattern
For deposit workflows, the safest mental model is:
- The trigger only finds and claims files
- The main workflow path handles success cleanup
- On Workflow Error handles failure cleanup
- success path:
  - process the claimed file
  - delete it, or move it to `archive/`
- error path:
  - move the claimed file to `error/<original_filename>`
  - optionally notify someone
Suggested failure strategy
If a workflow fails after the file has been claimed:
- leave the file where it is until your error branch runs
- in `trigger.on_error`, move the claimed file to something like `error/<original_filename>`
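In an error branch, the `error/<original_filename>` target can be derived from the claimed path by stripping the claim marker shown earlier. A minimal sketch, assuming the run id contains no dots:

```python
import re

# Matches the ".__raydocs__run_<runId>" marker; assumes the run id has no dots.
CLAIM_MARKER = re.compile(r"\.__raydocs__run_[^.]+")


def error_target(claimed_path: str) -> str:
    """Turn 'inbox/name.__raydocs__run_<id>.ext' into 'error/name.ext'."""
    filename = claimed_path.rsplit("/", 1)[-1]
    original = CLAIM_MARKER.sub("", filename, count=1)
    return f"error/{original}"
```

This keeps the human-facing name in `error/` while the claimed name remains available in the run input for traceability.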
Why Raydocs does not move files to error/ automatically
This is a deliberate design choice.
Different teams want different behavior:
- some want to delete successful files
- some want to archive them
- some want to rename them
- some want to keep the original name in `error/`
- some want to preserve the claimed name for traceability
Example workflow patterns
Pattern 1: Simple deposit then delete
- trigger on files in `inbox/`
- read the claimed file with the Filesystem node
- process it
- delete `deposit_file.claimed_path`
Pattern 2: Deposit then archive
- trigger on files in `inbox/`
- process the file
- move `deposit_file.claimed_path` to `archive/<original_filename>`
Pattern 3: Deposit with explicit error folder
- trigger on files in `inbox/`
- process the file in the main path
- add On Workflow Error
- in the error path, move `deposit_file.claimed_path` to `error/<original_filename>`
The Filesystem node
The Filesystem node is the action node you use after the trigger. It supports these operations: `list`, `read`, `exists`, `metadata`, `copy`, `move`, `delete`.

Typical uses:
- read the claimed file into the workflow
- move failed files into an error folder
- archive successful files
- delete temporary files after success
`move` can also be used as a rename operation.
Trigger configuration
The Filesystem Deposit Trigger supports: `scan_interval_minutes`, `inbox_paths`, `allowed_extensions`, `recursive`, `max_files_per_scan`, `ignore_hidden_files`.

The default is a 10 minute scan interval unless you set a different value.
Use this trigger when you want a generic filesystem inbox, not a fully custom polling engine.
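A hedged example configuration: the keys come from the list above, but the values (other than the documented 10 minute default interval) are illustrative, not documented defaults:

```python
# Illustrative values; only the keys and the 10 minute default come from the docs.
trigger_config = {
    "scan_interval_minutes": 10,       # documented default
    "inbox_paths": ["inbox/"],         # start with a single inbox folder
    "allowed_extensions": ["pdf"],     # narrow extension filter
    "recursive": False,
    "max_files_per_scan": 25,          # avoid huge bursts on the first import
    "ignore_hidden_files": True,
}
```
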
Practical recommendations
- Start with a single inbox folder and a narrow extension filter such as `pdf`
- Use `max_files_per_scan` to avoid huge bursts on the first import
- Prefer On Workflow Error over ad hoc local cleanup on every node
- Keep operator-facing failed files in an explicit `error/` folder
- Use the claimed path for technical cleanup and the original filename for human-facing naming
