JFrog CLI v2 was launched in July 2021. It includes changes to the functionality and usage of some of the legacy JFrog CLI commands. The changes are the result of feedback we received from users over time through GitHub, making the usage and functionality easier and more intuitive. For example, some of the default values changed and are now more consistent across different commands. We also took this opportunity to improve and restructure the code, as well as to replace old and deprecated functionality.
Most of the changes included in v2 are breaking changes compared to the v1 releases. We therefore packaged and released these changes under JFrog CLI v2, allowing users to migrate to v2 only when they are ready.
New enhancements to JFrog CLI are planned to be introduced as part of v2 only. v1 receives very little development attention nowadays. We therefore encourage users who haven't yet migrated to v2 to do so.
The default value of the --flat option is now set to false for the jfrog rt upload command.
The deprecated syntax of the jfrog rt mvn command is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt mvnc command.
The deprecated syntax of the jfrog rt gradle command is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt gradlec command.
The deprecated syntax of the jfrog rt npm and jfrog rt npm-ci commands is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt npmc command.
The deprecated syntax of the jfrog rt go command is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt go-config command.
The deprecated syntax of the jfrog rt nuget command is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt nugetc command.
All Bintray commands are removed.
The jfrog rt config command is removed and replaced by the jfrog config add command.
The jfrog rt use command is removed and replaced with the jfrog config use.
The --props command option and props file spec property for the jfrog rt upload command are removed, and replaced with the --target-props command option and targetProps file spec property respectively.
The following commands are removed and replaced with the following commands, respectively:
The jfrog rt go-publish command now only supports Artifactory version 6.10.0 and above. Also, the command no longer accepts the target repository as an argument. The target repository should be pre-configured using the jfrog rt go-config command.
The jfrog rt go command no longer falls back to the VCS when dependencies are not found in Artifactory.
The --deps, --publish-deps, --no-registry and --self options of the jfrog rt go-publish command are now removed.
The --apiKey option is now removed. The API key should now be passed as the value of the --password option.
The --exclude-patterns option is now removed, and replaced with the --exclusions option. The same is true for the excludePatterns file spec property, which is replaced with the exclusions property.
The JFROG_CLI_JCENTER_REMOTE_SERVER and JFROG_CLI_JCENTER_REMOTE_REPO environment variables are now removed and replaced with the JFROG_CLI_EXTRACTORS_REMOTE environment variable.
The JFROG_CLI_HOME environment variable is now removed and replaced with the JFROG_CLI_HOME_DIR environment variable.
The JFROG_CLI_OFFER_CONFIG environment variable is now removed and replaced with the CI environment variable. Setting CI to true disables all prompts.
The directory structure is now changed when the jfrog rt download command is used with placeholders and --flat=false (--flat=false is now the default). When placeholders are used, the value of the --flat option is ignored.
When the jfrog rt upload command now uploads symlinks to Artifactory, the target file referenced by the symlink is uploaded to Artifactory with the symlink name. If the --symlinks option is used, the symlink itself (not the referenced file) is uploaded, with the referenced file as a property attached to the file.
To download the executable, please visit the JFrog CLI Download Site.
You can also download the sources from the JFrog CLI Project on GitHub where you will also find instructions on how to build JFrog CLI.
The legacy name of JFrog CLI's executable is jfrog. In an effort to make the CLI usage easier and more convenient, we recently exposed a series of new installers, which install JFrog CLI with the new jf executable name. For backward compatibility, the old installers will remain available. We recommend however migrating to the newer jf executable name.
The following installers are available for JFrog CLI v2. These installers make JFrog CLI available through the jf executable.
The following installers are available for JFrog CLI v2. These installers make JFrog CLI available through the jfrog executable.
The following installers are available for JFrog CLI v1. These installers make JFrog CLI available through the jfrog executable.
If you're using JFrog CLI from a bash, zsh, or fish shell, you can install JFrog CLI's auto-completion scripts.
If you're installing JFrog CLI using Homebrew, the bash, zsh, or fish auto-complete scripts are automatically installed by Homebrew. Please make sure that your .bash_profile or .zshrc is configured as described in the Homebrew Shell Completion documentation.
With your favorite text editor, open $HOME/.zshrc and add jfrog to the plugin list. For example:
To install auto-completion for bash, run the following command and follow the instructions to complete the installation:
To install auto-completion for zsh, run the following command and follow the instructions to complete the installation:
To install auto-completion for fish, run the following command:
When used with Xray, JFrog CLI offers several means of authentication. JFrog CLI does not support accessing Xray without authentication.
To authenticate yourself using your Xray login credentials, either configure your credentials once using the jf c add command or provide the following option to each command.
--url
JFrog Xray API endpoint URL. It usually ends with /xray
--user
JFrog username
--password
JFrog password
To authenticate yourself using an Xray Access Token, either configure your Access Token once using the jf c add command or provide the following option to each command.
--url
JFrog Xray API endpoint URL. It usually ends with /xray
--access-token
JFrog access token
When used with Artifactory, JFrog CLI offers several means of authentication. JFrog CLI does not support accessing Artifactory without authentication.
To authenticate yourself using your JFrog login credentials, either configure your credentials once using the jf c add command or provide the following option to each command.
--url
JFrog Artifactory API endpoint URL. It usually ends with /artifactory
--user
JFrog username
--password
JFrog password or API key
For enhanced security, when JFrog CLI is configured to use a username and password / API key, it automatically generates an access token to authenticate with Artifactory. The generated access token is valid for one hour only. JFrog CLI automatically refreshes the token before it expires. The jf c add command allows disabling this functionality. This feature is currently not supported by commands which use external tools or package managers, or which work with JFrog Distribution.
To authenticate yourself using an Artifactory Access Token, either configure your Access Token once using the jf c add command or provide the following option to each command.
--url
JFrog Artifactory API endpoint URL. It usually ends with /artifactory
--access-token
JFrog access token
Note
Currently, authentication with RSA keys is not supported when working with external package managers and build tools (Maven, Gradle, Npm, Docker, Go and NuGet) or with the cUrl integration.
From version 4.4, Artifactory supports SSH authentication using RSA public and private keys. To authenticate yourself to Artifactory using RSA keys, follow these steps:
Enable SSH authentication as described in Configuring SSH.
Configure your Artifactory URL to have the following format: ssh://[host]:[port]
There are two ways to do this:
For each command, use the --url command option.
Specify the Artifactory URL in the correct format using the jf c add command.
Warning Don't include your Artifactory context URL
Make sure that the [host] component of the URL only includes the hostname or the IP, but not your Artifactory context URL.
Configure the path to your SSH key file. There are two ways to do this:
For each command, use the --ssh-key-path command option.
Specify the path using the jf c add command.
From Artifactory release 7.38.4, you can authenticate users using a client certificate (mTLS). Doing so requires a reverse proxy and some setup on the front reverse proxy (Nginx). Read about how to set this up here.
To authenticate with the proxy using a client certificate, either configure your certificate once using the jf c add command, or use the --client-cert-path and --client-cert-key-path command options with each command.
Note
Authentication using client certificates (mTLS) is not supported by commands which integrate with package managers.
Not Using a Public CA (Certificate Authority)?
This section is relevant for you if you're not using a public CA (Certificate Authority) to issue the SSL certificate used to connect to your Artifactory domain. You may not be using a public CA either because you're using self-signed certificates or you're running your own PKI services in-house (often by using a Microsoft CA).
In this case, you'll need to make those certificates available for JFrog CLI, by placing them inside the security/certs directory, which is under JFrog CLI's home directory. By default, the home directory is ~/.jfrog, but it can be also set using the JFROG_CLI_HOME_DIR environment variable.
Note
The supported certificate format is PEM. Either place a single certificate file with a .pem extension in the directory, or place as many certificate files as you need and run the c_rehash command on the directory, as follows: c_rehash ~/.jfrog/security/certs/
Some commands support the --insecure-tls option, which skips the TLS certificates verification.
Before version 1.37.0, JFrog CLI expected the certificates to be located directly under the security directory. JFrog CLI automatically moves the certificates to the new directory when installing version 1.37.0 or above. Downgrading back to an older version requires replacing the configuration directory manually. You'll find a backup of the old configuration under .jfrog/backup
JFrog CLI is a compact and smart client that provides a simple interface for automating access to JFrog products, simplifying your automation scripts and making them more readable and easier to maintain. JFrog CLI works with JFrog Artifactory, making your scripts more efficient and reliable in several ways:
Advanced upload and download capabilities
JFrog CLI allows you to upload and download artifacts concurrently, using a configurable number of threads that help your automated builds run faster. For big artifacts, you can define the number of chunks into which files are split for parallel download.
JFrog CLI optimizes both upload and download operations by skipping artifacts that already exist in their target location. Before uploading an artifact, JFrog CLI queries Artifactory with the artifact's checksum. If it already exists in Artifactory's storage, the CLI skips sending the file, and if necessary, Artifactory only updates its database to reflect the artifact upload. Similarly, when downloading an artifact from Artifactory, if the artifact already exists in the same download path, it will be skipped. With checksum optimization, long upload and download operations can be paused in the middle, and then be continued later where they were left off.
JFrog CLI supports uploading files to Artifactory using wildcard patterns, regular expressions, and ANT patterns, giving you an easy way to collect all the files you wish to upload. You can also download files using wildcard patterns.
Support for popular package managers and build tools
JFrog CLI offers comprehensive support for popular package managers and build tools. It seamlessly integrates with package managers like npm, Maven, NuGet, Docker, and more, allowing you to easily manage and publish packages.
Support for Build-Info
Build-Info is a comprehensive metadata Software Bill of Materials (SBOM) that captures detailed information about the components used in a build. It serves as a vital source of information, containing version history, artifacts, project modules, dependencies, and other crucial data collected during the build process. By storing this metadata in Artifactory, developers gain traceability and analysis capabilities to improve the quality and security of their builds. The Build-Info encompasses project module details, artifacts, dependencies, environment variables, and more. It is collected and outputted in a JSON format, facilitating easy access to information about the build and its components. JFrog CLI can create build-info and store the build-info in Artifactory.
Read more about JFrog CLI here.
This command verifies that Artifactory is accessible by sending it an applicative ping.
Command name
rt ping
Abbreviation
rt p
Command options:
--url
[Optional] JFrog Artifactory URL. (example: https://acme.jfrog.io/artifactory)
--server-id
[Optional] Server ID configured using the jf c add command. If not specified, the default configured Artifactory server is used.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
Command arguments:
The command accepts no arguments.
Ping the configured default Artifactory server.
Ping the configured Artifactory server with ID rt-server-1.
Ping the Artifactory server accessible through the specified URL.
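The three examples above can be sketched as follows, assuming JFrog CLI is installed and a server was configured with jf c add (the server ID and URL are illustrative):

```shell
# Ping the configured default Artifactory server.
jf rt ping

# Ping the server configured with the ID rt-server-1.
jf rt ping --server-id=rt-server-1

# Ping an Artifactory server directly by URL.
jf rt ping --url=https://acme.jfrog.io/artifactory
```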
Complex file manipulations may require several CLI commands. For example, you may need to upload several different sets of files to different repositories. To simplify the implementation of these complex manipulations, the JFrog CLI download, upload, move, copy and delete commands for JFrog Artifactory accept the --spec option, which replaces the inline command arguments and options. Similarly, you can create and update release bundles by providing the --spec command option. Each command uses an array of file specifications in JSON format with a corresponding schema, as described in the sections below. Note that if any of these commands are issued using both inline options and a file spec, the inline options override their counterparts specified in the file spec.
The file spec schema for the copy and move commands is as follows:
The file spec schema for the download command is as follows:
The file spec schema for the create and update release bundle v1 commands is as follows:
The file spec schema for the upload command is as follows:
The file spec schema for the search and delete commands are as follows:
The following examples can help you get started using File Specs.
Download all files located under the all-my-frogs directory in the my-local-repo repository to the froggy directory.
Download all files located under the all-my-frogs directory in the my-local-repo repository to the froggy directory. Download only files which are artifacts of build number 5 of build my-build .
Download all files retrieved by the AQL query to the froggy directory.
Upload all zip files located under the resources directory to the zip folder, under the all-my-frogs repository, AND
all TGZ files located under the resources directory to the tgz folder, under the all-my-frogs repository.
Tag all zip files with type = zip and status = ready.
Tag all tgz files with type = tgz and status = ready.
Upload all zip files located under the resources directory to the zip folder, under the all-my-frogs repository.
Package all files located (including subdirectories) under the resources directory into a zip archive named archive.zip, and upload it into the root of the all-my-frogs repository.
Download all files located under the all-my-frogs directory in the my-local-repo repository, except for files with the .txt extension and all files inside the all-my-frogs directory with the props. prefix.
Notice that the exclude patterns do not include the repository.
Download the latest file uploaded to the all-my-frogs directory in the my-local-repo repository.
Search for the three largest files located under the all-my-frogs directory in the my-local-repo repository. If there are files with the same size, sort them "internally" by creation date.
Download the second-latest file uploaded to the all-my-frogs directory in the my-local-repo repository.
This example shows how to delete artifacts in Artifactory under a specified path, based on how old they are.
The following File Spec finds all the folders which match the following criteria:
They are under the my-repo repository.
They are inside a folder with a name that matches abc-*-xyz and is located at the root of the repository.
Their name matches ver*
They were created more than 7 days ago.
This example uses Placeholders. For each .tgz file in the source directory, create a corresponding directory with the same name in the target repository and upload it there. For example, a file named froggy.tgz should be uploaded to my-local-rep/froggy (froggy will be created as a folder in Artifactory).
This example uses Placeholders. Upload all files whose names begin with "frog" to the frogfiles folder in the target repository, and append each name with the text "-up". For example, a file called froggy.tgz should be renamed froggy.tgz-up.
The following two examples lead to the exact same outcome. The first one uses Placeholders, while the second one does not. Both examples download all files from the generic-local repository to be under the my/local/path/ local file-system path, while maintaining the original Artifactory folder hierarchy. Notice the different flat values in the two examples.
This example creates a release bundle v1 and applies "pathMapping" to the artifact paths after distributing the release bundle v1.
All occurrences of the "a1.in" file are fetched and mapped to the "froggy" repository at the edges.
Fetch all artifacts retrieved by the AQL query.
Create the release bundle v1 with the artifacts and apply the path mappings at the edges after distribution.
The "pathMapping" option is provided, allowing users to control the destination of the release bundle artifacts at the edges.
To learn more, visit the Create Release Bundle v1 Version documentation.
JSON schemas allow you to annotate and validate JSON files. The JFrog File Spec schema is available in the JSON Schema Store catalog and in the following link: https://github.com/jfrog/jfrog-cli/blob/v2/schema/filespec-schema.json.
The File Spec schema is automatically applied to the following file patterns:
**/filespecs/*.json
*filespec*.json
*.filespec
To apply the File Spec schema validation, install the JFrog VS-Code extension.
Alternatively, copy the following to your settings.json file:
settings.json
The JFrog CLI offers enormous flexibility in how you download, upload, copy, or move files through the use of wildcard or regular expressions with placeholders.
Any wildcard enclosed in parentheses in the source path can be matched with a corresponding placeholder in the target path to determine the name of the artifact once uploaded.
For each .tgz file in the source directory, create a corresponding directory with the same name in the target repository and upload it there. For example, a file named froggy.tgz should be uploaded to my-local-rep/froggy (froggy will be created as a folder in Artifactory).
Upload all files whose name begins with "frog" to folder frogfiles in the target repository, but append its name with the text "-up". For example, a file called froggy.tgz should be renamed froggy.tgz-up.
Upload all files in the current directory to the my-local-repo repository and place them in directories that match their file extensions.
Copy all zip files under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository and append the copied files' names with ".cp".
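The four placeholder examples above can be sketched as follows; the repository names and paths are illustrative:

```shell
# Upload each .tgz file into a folder named after the file:
# froggy.tgz is uploaded to my-local-repo/froggy/
jf rt upload "(*).tgz" my-local-repo/{1}/

# Upload files whose names begin with "frog" to frogfiles,
# appending "-up" to each name: froggy.tgz becomes froggy.tgz-up.
jf rt upload "(frog*)" my-local-repo/frogfiles/{1}-up

# Place uploaded files in directories matching their file extensions.
jf rt upload "(*).(*)" my-local-repo/{2}/{1}.{2}

# Copy all zip files under /rabbit, appending ".cp" to the copied names.
jf rt copy "source-frog-repo/rabbit/(*).zip" "target-frog-repo/rabbit/{1}.zip.cp"
```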
JFrog CLI lets you upload and download artifacts between your local file system and Artifactory. This also includes uploading symlinks (soft links).
Symlinks are stored in Artifactory as files with a zero size, with the following properties:
symlink.dest - the actual path on the original filesystem to which the symlink points
symlink.destsha1 - the SHA1 checksum of the value in the symlink.dest property
To upload symlinks, the jf rt upload command should be executed with the --symlinks option set to true.
When downloading symlinks stored in Artifactory, the CLI can verify that the file to which the symlink points actually exists and that it has the correct SHA1 checksum. To add this validation, you should use the --validate-symlinks option with the jf rt download command.
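Both options can be sketched as follows; the repository and local paths are illustrative:

```shell
# Upload the symlinks themselves (rather than the files they reference).
jf rt upload --symlinks=true "build/*" my-repo/path/

# Download, verifying each symlink target exists and matches its SHA1.
jf rt download --validate-symlinks=true "my-repo/path/*" ./build/
```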
Execute a cURL command, using the configured Artifactory details. The command expects the cUrl client to be included in the PATH.
Note - This command supports only Artifactory REST APIs, which are accessible under https://<JFrog base URL>/artifactory/api/
Command name
rt curl
Abbreviation
rt cl
Command options:
--server-id
[Optional] Server ID configured using the jf c add command. If not specified, the default configured server is used.
Command arguments:
cUrl arguments and flags
The same list of arguments and flags passed to cUrl, with the following exceptions: 1. The full Artifactory URL should not be passed; instead, pass the REST endpoint URI. 2. The login credentials should not be passed; instead, use the --server-id option.
Currently only servers configured with username and password / API key are supported.
Execute the cUrl client, to send a GET request to the /api/build endpoint to the default Artifactory server
Execute the cUrl client, to send a GET request to the /api/build endpoint to the configured my-rt-server server ID.
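The two examples above can be sketched as follows (my-rt-server is an illustrative server ID):

```shell
# Send a GET request to the /api/build endpoint on the default server.
jf rt curl -XGET /api/build

# The same request, using the server configured with ID my-rt-server.
jf rt curl -XGET /api/build --server-id=my-rt-server
```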
JFrog CLI offers a set of commands for managing Artifactory configuration entities.
This command creates users in bulk. The details of the users are provided in a CSV file. Here's the file format.
Note: The first line in the CSV holds the cells' headers. It is mandatory and is used by the command to map each cell value to the users' details.
The CSV can include additional columns, with different headers, which will be ignored by the command.
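A minimal sketch of such a CSV file, using the mandatory name, password and email headers (all values are hypothetical):

```csv
name,password,email
frogger,P4ssw0rd1,frogger@example.com
toady,P4ssw0rd2,toady@example.com
```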
Command-name
rt users-create
Abbreviation
rt uc
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--csv
[Mandatory] Path to a CSV file with the users' details. The first row of the file should include the name,password,email headers.
--replace
[Optional] Set to true if you'd like existing users or groups to be replaced.
--users-groups
[Optional] A comma-separated(,) list of groups with which the new users will be associated.
Command arguments:
The command accepts no arguments
Create new users according to details defined in the path/to/users.csv file.
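The example above can be sketched as follows; the file path and group names are illustrative:

```shell
# Create users from the CSV file.
jf rt users-create --csv path/to/users.csv

# Optionally associate the new users with groups and replace existing users.
jf rt users-create --csv path/to/users.csv --users-groups "qa,dev" --replace=true
```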
This command deletes users in bulk. The command accepts a list of usernames to delete. The list can either be provided as a comma-separated argument, or as a CSV file which includes one column with the usernames. Here's the CSV format.
The first line in the CSV holds the cells' headers. It is mandatory and is used by the command to map each cell value to the users' details.
The CSV can include additional columns, with different headers, which will be ignored by the command.
Command-name
rt users-delete
Abbreviation
rt udel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--csv
[Optional] Path to a CSV file with the usernames to delete. The first row of the file is reserved for the cells' headers. It must include the "username" header.
Command arguments:
users list
A comma-separated(,) list of usernames to delete. If the --csv command option is used, this argument becomes optional.
Delete the users according to the usernames defined in the path/to/users.csv file.
Delete the users according to the u1, u2 and u3 usernames.
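The two examples above can be sketched as follows; the file path and usernames are illustrative:

```shell
# Delete the users listed in a CSV file.
jf rt users-delete --csv path/to/users.csv

# Delete users by a comma-separated list of usernames.
jf rt users-delete "u1,u2,u3"
```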
This command creates a new users group.
Command-name
rt group-create
Abbreviation
rt gc
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
Command arguments:
group name
The name of the group to create.
Create a new group named reviewers.
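The example above can be sketched as:

```shell
jf rt group-create reviewers
```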
This command adds a list of existing users to a group.
Command-name
rt group-add-users
Abbreviation
rt gau
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
Command arguments:
group name
The name of the group to add users to.
users list
Comma-separated(,) list of usernames to add to the specified group.
Add the users with the usernames u1, u2 and u3 to the reviewers group.
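The example above can be sketched as:

```shell
jf rt group-add-users reviewers "u1,u2,u3"
```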
This command deletes a group.
Command-name
rt group-delete
Abbreviation
rt gdel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
Command arguments:
group name
The name of the group to delete.
Delete the reviewers group.
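The example above can be sketched as:

```shell
jf rt group-delete reviewers
```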
JFrog CLI offers a set of commands for managing Artifactory repositories. You can create, update and delete repositories. To make it easier to manage repositories, the commands which create and update repositories accept a pre-defined configuration template file. This template file can also include variables, which can later be replaced with values when creating or updating the repositories. The configuration template file is created using the jf rt repo-template command.
This is an interactive command, which creates a configuration template file. This file should be used as an argument for the jf rt repo-create or the jf rt repo-update commands.
When using this command to create the template, you can also provide replaceable variables instead of fixed values. Then, when the template is used to create or update repositories, values can be provided to replace the variables in the template.
Command-name
rt repo-template
Abbreviation
rt rpt
Command options:
The command has no options.
Command arguments:
template path
Specifies the local file system path for the template file created by the command. The file should not exist.
Create a configuration template, with a variable for the repository name. Then, create a repository using this template, and provide repository name to replace the variable.
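The flow above can be sketched as follows, assuming the template defines a ${repo-name} variable (the file and repository names are illustrative):

```shell
# Interactively create the template, defining ${repo-name} as a variable.
jf rt repo-template template.json

# Create a repository from the template, supplying a value for the variable.
jf rt repo-create template.json --vars "repo-name=my-generic-repo"
```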
These two commands create a new repository and update an existing repository, respectively. Both commands accept as an argument a configuration template, which can be created by the jf rt repo-template command. The template also supports variables, which can be replaced with values provided when it is used.
Command-name
rt repo-create / rt repo-update
Abbreviation
rt rc / rt ru
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the template. In the template, the variables should be used as follows: ${key1}.
Command arguments:
template path
Specifies the local file system path for the template file to be used for the repository creation. The template can be created using the "jf rt rpt" command.
Example 1
Create a repository, using the template.json file previously generated by the repo-template command.
Example 2
Update a repository, using the template.json file previously generated by the repo-template command.
Example 3
Update a repository, using the template.json file previously generated by the repo-template command. Replace the repo-name variable inside the template with a name for the updated repository.
This command permanently deletes a repository, including all of its content.
Command name
rt repo-delete
Abbreviation
rt rdel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
Command arguments:
repository key
Specifies the repositories that should be removed. You can use wildcards to specify multiple repositories.
Delete a repository from Artifactory.
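The example above can be sketched as follows; deletion is permanent, and --quiet skips the confirmation prompt (my-repo is an illustrative repository key):

```shell
jf rt repo-delete my-repo --quiet=true
```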
JFrog CLI offers commands for creating and deleting replication jobs in Artifactory. To make it easier to create replication jobs, the command which creates the replication job accepts a pre-defined configuration template file. This template file can also include variables, which can later be replaced with values when creating the replication job. The configuration template file is created using the jf rt replication-template command.
This command creates a configuration template file, which will be used as an argument for the jf rt replication-create command.
When using this command to create the template, you can also provide replaceable variables instead of fixed values. Then, when the template is used to create replication jobs, values can be provided to replace the variables in the template.
Command-name
rt replication-template
Abbreviation
rt rplt
Command options:
The command has no options.
Command arguments:
template path
Specifies the local file system path for the template file created by the command. The file should not exist.
Create a configuration template, with two variables for the source and target repositories. Then, create a replication job using this template, and provide source and target repository names to replace the variables.
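The flow above can be sketched as follows, assuming the template defines ${source} and ${target} variables (all names are illustrative):

```shell
# Interactively create the template, defining the source and target variables.
jf rt replication-template template.json

# Create the replication job, supplying values for both variables.
jf rt replication-create template.json --vars "source=src-repo;target=tgt-repo"
```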
This command creates a new replication job for a repository. The command accepts as an argument a configuration template, which can be created by the jf rt replication-template command. The template also supports variables, which can be replaced with values, provided when it is used.
Command-name
replication-create
Abbreviation
rt rplc
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the template. In the template, the variables should be used as follows: ${key1}.
Command arguments:
template path
Specifies the local file system path for the template file to be used for the replication job creation. The template can be created using the "jf rt rplt" command.
Example 1
Create a replication job, using the template.json file previously generated by the replication-template command.
Example 2
Create a replication job, using the template.json file previously generated by the replication-template command. Replace the source and target variables inside the template with the names of the replication source and target repositories.
This command permanently deletes a replication job from a repository.
Command name
rt replication-delete
Abbreviation
rt rpldel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
Command arguments:
repository key
The repository from which the replications will be deleted.
Delete the replication jobs configured for a repository in Artifactory.
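For example, assuming a placeholder repository key of my-repo:

```shell
# Delete the replication jobs configured for the my-repo repository,
# skipping the confirmation prompt.
jf rt rpldel my-repo --quiet
```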
JFrog CLI offers commands for creating, updating, and deleting permission targets in Artifactory. To make it easier to create and update permission targets, the commands that create and update them accept a pre-defined configuration template file. This template file can also include variables, which can later be replaced with values when creating or updating the permission target. The configuration template file is created using the jf rt permission-target-template command.
This command creates a configuration template file, which will be used as an argument for the jf rt permission-target-create and jf rt permission-target-update commands.
Command-name
rt permission-target-template
Abbreviation
rt ptt
Command options:
The command has no options.
Command arguments:
template path
Specifies the local file system path for the template file created by the command. The file should not exist.
These commands create/update a permission target. The commands accept as an argument a configuration template, which should be created by the jf rt permission-target-template command beforehand. The template also supports variables, which can be replaced with values, provided when it is used.
Command-name
permission-target-create / permission-target-update
Abbreviation
rt ptc / rt ptu
Command arguments:
template path
Specifies the local file system path for the template file to be used for the permission target creation or update. The template should be created using the "jf rt ptt" command.
Command-name
permission-target-create / permission-target-update
Abbreviation
rt ptc / rt ptu
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--vars
[Optional] List of semicolon-separated (;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the template. In the template, the variables should be used as follows: ${key1}.
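A minimal sketch of the create/update flow, assuming the template was created with jf rt ptt and contains a single ${name} variable (all names are placeholders):

```shell
# Create the configuration template.
jf rt ptt template.json

# Create a permission target from the template, replacing the ${name} variable.
jf rt ptc template.json --vars="name=my-permission-target"

# Update an existing permission target using the same template.
jf rt ptu template.json --vars="name=my-permission-target"
```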
This command permanently deletes a permission target.
Command name
rt permission-target-delete
Abbreviation
rt ptdel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
Command arguments:
permission target name
The permission target that should be removed.
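For example, with a placeholder permission target name:

```shell
# Permanently delete the permission target, skipping the confirmation prompt.
jf rt ptdel my-permission-target --quiet
```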
This page describes how to use the JFrog CLI to create external evidence files, which are then deployed to Artifactory. You can create evidence for:
Artifacts
Packages
Builds
Release Bundles v2
Note
The Evidence service requires Artifactory 7.104.2 or above.
The ability for users to attach external evidence to Artifactory, as described here, requires an Enterprise+ subscription.
The ability to collect internal evidence generated by Artifactory requires a Pro subscription or above. Internal evidence generated by Xray requires a Pro X subscription or above.
In the current release, an evidence file can be signed with one key only.
The maximum evidence file size supported by Artifactory is 16 MB.
For more information about the API used for deploying evidence to Artifactory, see Deploy Evidence.
To deploy external evidence, use an access token or the web login mechanism for authentication. Basic authentication (username/password) is not supported.
JFrog CLI uses the following syntax for creating evidence:
Artifact Evidence
Package Evidence
Build Evidence
Release Bundle v2 Evidence
--predicate
file-path
Mandatory field.
Defines the path to a locally stored, arbitrary JSON file that contains the predicates.
--predicate-type
predicate-type-uri
Mandatory field.
The type of predicate defined by the JSON file. Sample predicate type URIs include:
--key
local-private-key-path
Optional path for a private key (see Tip below). Supported key types include:
Tip
You can define the key using the JFROG_CLI_SIGNING_KEY environment variable as an alternative to using the --key command parameter. If the environment variable is not defined, the --key parameter is mandatory.
Note
Two key formats are supported: PEM and SSH
--key-alias
RSA-1024
Optional case-sensitive name for the public key created from the private key. The public key is used to verify the DSSE envelope that contains the evidence.
If the key-alias is included, DSSE verification will fail if the same key alias is not found in Artifactory.
If the key-alias is not included, DSSE verification with the public key is not performed during creation.
Tip
You can define a key alias using the JFROG_CLI_KEY_ALIAS environment variable as an alternative to using the --key-alias command parameter.
Note
In the unlikely event the public key is deleted from Artifactory, it may take up to 4 hours for the Evidence service to clear the key from the cache. Evidence can still be signed with the deleted key during this time.
--markdown
md file
Optional path to a file that contains evidence formatted in markdown.
--subject-repo-path
target-path
Mandatory field.
Each evidence file must have a single subject only and must include the path. Artifacts located in local repositories aggregated inside virtual repositories are supported (evidence is added to the local path).
--subject-sha256
digest
Optional digest (sha256) of the artifact.
If a digest is provided, it is verified against the subject's sha256 as it appears in Artifactory.
If a digest is not provided, the sha256 is taken from the path in Artifactory.
--package-name
name
Mandatory field.
--package-version
version-number
Mandatory field.
--package-repo-key
repo-name
Mandatory field.
--build-name
name
Mandatory field unless environment variables are used (see tip below).
--build-number
version-number
Mandatory field unless environment variables are used (see tip below).
Tip
You can use the JFROG_CLI_BUILD_NAME and JFROG_CLI_BUILD_NUMBER environment variables as an alternative to the build command parameters.
--release-bundle
name
Mandatory field.
--release-bundle-version
version-number
Mandatory field.
Note
When DSSE verification is successful, the following message is displayed:
When DSSE verification is unsuccessful, the following message is displayed:
Artifact Evidence Sample
In the sample above, the command creates a signed evidence file with a predicate type of SLSA provenance for an artifact named file.txt.
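A hedged sketch of such a command, assuming the evidence subcommand is jf evd create and combining the options documented above (the file names, predicate type URI, repository path, and key alias are placeholders):

```shell
jf evd create \
  --predicate ./predicate.json \
  --predicate-type https://in-toto.io/attestation/release/v0.1 \
  --subject-repo-path my-repo/path/file.txt \
  --key ./private.pem \
  --key-alias my-key-alias
```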
Package Evidence Sample
Build Evidence Sample
Release Bundle v2 Evidence Sample
When used with Xray, JFrog CLI offers several means of authentication: JFrog CLI does not support accessing Xray without authentication.
To authenticate yourself using your Xray login credentials, either configure your credentials once using the jf c add command, or provide the following options to each command.
Command option
Description
--url
JFrog Xray API endpoint URL. It usually ends with /xray
--user
JFrog username
--password
JFrog password
To authenticate yourself using an Xray access token, either configure your access token once using the jf c add command, or provide the following options to each command.
Command option
Description
--url
JFrog Xray API endpoint URL. It usually ends with /xray
--access-token
JFrog access token
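For example, a hedged invocation that passes the connection details inline (the URL, token variable, and file pattern are placeholders):

```shell
jf scan "build/*.zip" \
  --url="https://acme.jfrog.io/xray" \
  --access-token="$MY_XRAY_TOKEN"
```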
To use the CLI, install it on your local machine, or download its executable, place it anywhere in your file system, and add its location to your PATH environment variable.
Environment Variables
The jf options command displays all the supported environment variables.
JFrog CLI makes use of the following environment variables:
JFrog CLI supports using an HTTP/S proxy. All you need to do is set HTTP_PROXY or HTTPS_PROXY environment variable with the proxy URL.
HTTP_PROXY, HTTPS_PROXY and NO_PROXY are the industry standards for proxy usages.
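For example (the proxy addresses are placeholders):

```shell
# Route JFrog CLI traffic through a proxy.
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"

# Hosts that should bypass the proxy.
export NO_PROXY="localhost,127.0.0.1,.internal.example.com"

# Subsequent JFrog CLI calls now go through the proxy, e.g.:
# jf rt ping
```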
You can use the jf login command to authenticate with the JFrog Platform through the web browser. This command is interactive only: it does not accept any options and cannot be used in a CI server.
This command allows creating access tokens for users in the JFrog Platform. By default, a user-scoped token is created. Administrators may provide the scope explicitly with '--scope', or implicitly with '--groups' or '--grant-admin'.
Command name
access-token-create
Abbreviation
atc
Command arguments:
username
The username for which this token is created. If not specified, the token will be created for the current user.
Command options:
--audience
[Optional]
A space-separated list of the other instances or services that should accept this token identified by their Service-IDs.
--description
[Optional]
Free text token description. Useful for filtering and managing tokens. Limited to 1024 characters.
--expiry
[Optional]
The amount of time, in seconds, it would take for the token to expire. Must be non-negative. If not provided, the platform default will be used. To create a token that never expires, set this to zero. Non-admin users may only set a value equal to or lower than the platform default set by an administrator (1 year by default).
--grant-admin
[Default: false]
Set to true to provide admin privileges to the access token. This is only available for administrators.
--groups
[Optional]
A list of comma-separated (,) groups for the access token to be associated with. This is only available for administrators.
--project
[Optional]
The project for which this token is created. Enter the project name on which you want to apply this token.
--reference
[Default: false]
Generate a Reference Token (alias to Access Token) in addition to the full token (available from Artifactory 7.38.10).
--refreshable
[Default: false]
Set to true if you'd like the token to be refreshable. A refresh token will also be returned in order to be used to generate a new token once it expires.
--scope
[Optional]
The scope of access that the token provides. This is only available for administrators.
Create an access token for the user in the default server configured by the jf c add command:
Create an access token for the user with the toad username:
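The two examples above can be sketched as:

```shell
# Create an access token for the current user on the default configured server.
jf atc

# Create an access token for the user named toad.
jf atc toad
```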
The config add and config edit commands are used to add and edit the JFrog Platform server configuration, stored in JFrog CLI's configuration storage. These configured servers can be used by the other commands. The configured servers' details can be overridden per command by passing in alternative values for the URL and login credentials. The configured values are saved in a file under the JFrog CLI home directory.
Command Name
config add / config edit
Abbreviation
c add / c edit
Command options:
--access-token
[Optional]
Access token.
--artifactory-url
[Optional]
JFrog Artifactory URL. (example: https://acme.jfrog.io/artifactory)
--basic-auth-only
[Default: false]
Used for Artifactory authentication. Set to true to disable replacing the username and password/API key with an automatically created access token that is refreshed hourly. The username and password/API key will still be used with commands that use external tools or the JFrog Distribution service. Can only be passed along with the username and password/API key options.
--client-cert-key-path
[Optional]
Private key file for the client certificate in PEM format.
--client-cert-path
[Optional]
Client certificate file in PEM format.
--dist-url
[Optional]
Distribution URL. (example: https://acme.jfrog.io/distribution)
--enc-password
--insecure-tls
[Default: false]
Set to true to skip TLS certificate verification while encrypting the Artifactory password during the config process.
--interactive
[Default: true, unless $CI is true]
Set to false if you do not want the config command to be interactive.
--mission-control-url
[Optional]
JFrog Mission Control URL. (example: https://acme.jfrog.io/ms)
--password
[Optional]
JFrog Platform password.
--ssh-key-path
[Optional]
For authentication with Artifactory. SSH key file path.
--url
[Optional]
JFrog Platform URL. (example: https://acme.jfrog.io)
--user
[Optional]
JFrog Platform username.
--xray-url
[Optional] Xray URL. (example: https://acme.jfrog.io/xray)
--overwrite
[Available for config add only] [Default: false] Overwrites the instance configuration if an instance with the same ID already exists.
Command arguments:
server ID
A unique ID for the server configuration.
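For example, a non-interactive configuration with placeholder server ID, URL, and credentials:

```shell
jf c add my-server \
  --url="https://acme.jfrog.io" \
  --user="admin" \
  --password="$ADMIN_PASSWORD" \
  --interactive=false
```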
The config remove command is used to remove JFrog Platform server configuration, stored in JFrog CLI's configuration storage.
Command name
config remove
Abbreviation
c rm
Command options:
--quiet
[Default: $CI]
Set to true to skip the delete confirmation message.
Command arguments:
server ID
The server ID to remove. If no argument is sent, all configured servers are removed.
The config show command shows the stored configuration. You may show a specific server's configuration by sending its ID as an argument to the command.
Command name
config show
Abbreviation
c s
Command arguments:
server ID
The ID of the server to show. If no argument is sent, all configured servers are shown.
The config use command sets a configured server as default. The following commands will use this server.
Command name
config use
Command arguments:
server ID
The ID of the server to set as default.
The config export command generates a token, which stores the server configuration. This token can be used by the config import command, to import the configuration stored in the token, and save it in JFrog CLI's configuration storage.
Command name
config export
Abbreviation
c ex
Command arguments:
server ID
The ID of the server to export
Command name
config import
Abbreviation
c im
Command arguments:
server token
The token to import
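A hedged sketch of moving a configuration between machines (the server ID is a placeholder; the token itself is elided):

```shell
# On the source machine: print a token holding the my-server configuration.
jf c ex my-server

# On the target machine: import the configuration from the token.
jf c im "<token printed by the export command>"
```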
Starting from version 1.37.0, JFrog CLI introduces support for encrypting sensitive data stored in its configuration using an encryption key stored in a file. Follow these steps to enable encryption:
Generate a random 32-character master key. Ensure that the key size is exactly 32 characters. For example: f84hc22dQfhe9f8ydFwfsdn48!wejh8A
Create a file named security.yaml under ~/.jfrog/security.
If you've customized the default JFrog CLI home directory by setting the JFROG_CLI_HOME_DIR environment variable, create the security/security.yaml file under the configured home directory.
Add the generated master key to the security.yaml file:
Ensure that the security.yaml file has only read permissions for the user running JFrog CLI.
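The steps above can be sketched as follows. This is a minimal sketch: it assumes the default home directory and a version/masterKey layout for security.yaml; adjust the path if JFROG_CLI_HOME_DIR is set.

```shell
# Generate a random 32-character master key (24 random bytes -> 32 base64 chars).
masterKey=$(head -c 24 /dev/urandom | base64)

# Create the security directory and the security.yaml file holding the key.
mkdir -p ~/.jfrog/security
cat > ~/.jfrog/security/security.yaml <<EOF
version: 1
masterKey: "$masterKey"
EOF

# Restrict the file to the user running JFrog CLI only.
chmod 600 ~/.jfrog/security/security.yaml
```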
The configuration will be encrypted the next time JFrog CLI accesses the config. If you have existing configurations stored before creating the file, you'll need to reconfigure the servers stored in the config.
Warning: When upgrading JFrog CLI from a version prior to 1.37.0 to version 1.37.0 or above, automatic changes are made to the content of the ~/.jfrog directory to support the new functionality introduced. Before making these changes, the content of the ~/.jfrog directory is backed up inside the ~/.jfrog/backup directory. After enabling sensitive data encryption, it is recommended to remove the backup directory to ensure no sensitive data is left unencrypted.
Starting from version 2.36.0, JFrog CLI also supports encrypting sensitive data in its configuration using an encryption key stored in an environment variable. To enable encryption, follow these steps:
Generate a random 32-character master key. Ensure that the key size is exactly 32 characters. For example: f84hc22dQfhe9f8ydFwfsdn48!wejh8A
Store the key in an environment variable named JFROG_CLI_ENCRYPTION_KEY.
The configuration will be encrypted the next time JFrog CLI attempts to access the config. If you have configurations already stored before setting the environment variable, you'll need to reconfigure the servers stored in the config.
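For example, generating and exporting such a key (a sketch; any 32-character value works):

```shell
# 24 random bytes encode to exactly 32 base64 characters.
export JFROG_CLI_ENCRYPTION_KEY=$(head -c 24 /dev/urandom | base64)
```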
ONLY ACTIVE JFROG CUSTOMERS ARE AUTHORIZED TO USE THE JFROG AI ASSISTANT. ALL OTHER USES ARE PROHIBITED.
The JFrog CLI AI Command Assistant streamlines your workflow by turning natural language inputs into JFrog CLI commands.
Simply describe your desired actions, and the assistant generates commands with all necessary parameters, whether you're uploading artifacts, managing repositories, scanning your code, or performing other actions using the JFrog CLI.
Each query is treated individually, and while the interface allows you to refine requests, it doesn’t maintain a chat history.
This tool helps users access the full power of JFrog CLI without needing to remember specific syntax, ensuring efficiency and accuracy.
Note: this is the first version of JFrog CLI AI, so it is limited to Artifactory and Xray commands only.
To use the JFrog CLI AI Command Assistant, follow these simple steps:
Ensure that you are in a terminal session where JFrog CLI is installed and configured.
This feature is available starting from CLI version 2.69 and above. To validate your version, run:
jf --version
Type the following command to initiate the AI assistant:
jf how
After entering the command, you will see a prompt:
Your request:
Describe in natural language what you would like the JFrog CLI to do. The AI assistant will generate the exact CLI command needed.
For example, you might type:
Your request: How to upload all files in the 'build' directory to the 'my-repo' repository?
The AI assistant will process your request and output the corresponding JFrog CLI command, including all necessary parameters. For the example above, it will generate:
jf rt u build/ my-repo/
You can now copy the generated command and run it in your terminal.
If needed, you can refine your request and try again.
Some of the Artifactory commands make use of the following environment variable:
Note
The jf scan command scans files on the local file system with Xray.
Note
This command requires:
Version 3.29.0 or above of Xray
Version 2.1.0 or above of JFrog CLI
Scans all the files located in the path/to/files/ file-system directory using the watch1 watch defined in Xray.
Scans all the files located in the path/to/files/ file-system directory using the watch1 and watch2 Watches defined in Xray.
Scans all the zip files located in the path/to/files/ file-system directory using the watch1 and watch2 Watches defined in Xray.
Scans all the tgz files located in the path/to/files/ file-system directory using the policies defined for project-1.
Scans all the tgz files located in the current directory using the policies defined for the libs-local/release-artifacts/ path in Artifactory.
Scans all the tgz files located in the current directory. Shows all known vulnerabilities, regardless of the policies defined in Xray.
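The examples above can be sketched as follows (the watch, project, and path names are the placeholders used in the descriptions):

```shell
# Scan using one or more Xray Watches.
jf scan "path/to/files/" --watches "watch1"
jf scan "path/to/files/" --watches "watch1,watch2"
jf scan "path/to/files/*.zip" --watches "watch1,watch2"

# Scan using the policies defined for a JFrog project or an Artifactory path.
jf scan "path/to/files/*.tgz" --project "project-1"
jf scan "*.tgz" --repo-path "libs-local/release-artifacts/"

# With no watch/project/path, all known vulnerabilities are shown.
jf scan "*.tgz"
```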
The jf docker scan command scans Docker containers located on the local file system using the Docker client and JFrog Xray. The containers do not need to be deployed to Artifactory or any other container registry before they can be scanned.
Note
This command requires:
Version 3.40.0 or above of Xray
Version 2.11.0 or above of JFrog CLI
Scan the local reg1/repo1/img1:1.0.0 container and show all known vulnerabilities, regardless of the policies defined in Xray.
Scan the local reg1/repo1/img1:1.0.0 container and show all violations according to the policy associated with my-project JFrog project.
Scan the local reg1/repo1/img1:1.0.0 container and show all violations according to the policy associated with my-watch Xray Watch.
Scan the local reg1/repo1/img1:1.0.0 container and show all violations according to the policy associated with the releases-local/app1/ path in Artifactory.
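The examples above can be sketched as (image, project, watch, and path names are taken from the descriptions):

```shell
# Show all known vulnerabilities for a local image.
jf docker scan reg1/repo1/img1:1.0.0

# Show violations according to a project, a Watch, or an Artifactory path policy.
jf docker scan reg1/repo1/img1:1.0.0 --project my-project
jf docker scan reg1/repo1/img1:1.0.0 --watches my-watch
jf docker scan reg1/repo1/img1:1.0.0 --repo-path releases-local/app1/
```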
The scan command can be used to scan tarballs of Docker and OCI images on the local file system. It requires saving the image on the file system as an uncompressed tarball using a compliant tool, and then scanning it with the jf s command. The image must be saved to the file system uncompressed, in a <name>.tar file name.
Note
This command requires:
Version 3.61.5 or above of Xray.
Version 2.14.0 or above of JFrog CLI.
Use the Docker client docker save command to save the image to the file system for scanning.
Example:
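A hedged sketch (the image name is a placeholder):

```shell
# Save the image as an uncompressed tarball, then scan it.
docker save my-image:1.0.0 -o my-image.tar
jf s my-image.tar
```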
Use the Skopeo CLI to save an image to the file system. The output image can be in either OCI or Docker format.
Example:
Use the Podman CLI to save an image to the file system. The output image can be in either OCI or Docker format.
Example:
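A hedged sketch (the image name is a placeholder):

```shell
# Save the image in Docker-archive format, then scan it.
podman save --format docker-archive -o my-image.tar my-image:1.0.0
jf s my-image.tar
```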
Use the Kaniko --tarPath flag to save built images to the file system, and later scan them with JFrog CLI. The example below runs Kaniko in Docker.
Example:
JFrog Curation defends your software supply chain, enabling early blocking of malicious or risky open-source packages before they even enter. Seamlessly identify harmful, vulnerable, or risky packages, ensuring increased security, compliance, and developer productivity.
'curation-audit' is a JFrog CLI command designed for developers to scan their projects and identify third-party dependencies that violate the restrictions set by the Curation service. The command provides detailed insights into the specific package policies being violated, which leads to the packages being blocked by the Curation service. Additionally, when feasible, 'curation-audit' may suggest alternative versions of the packages that comply with the Curation policies.
Moreover, curation-audit supports waiver requests for eligible violations. If configured in the policy, developers can select the blocked package and request a waiver from the policy owner.
The curation-audit command supports the following package managers and build systems:
Npm (npm)
Maven (mvn) - Requires Xray 3.92 and above, and Artifactory 7.82 and above
Pip (pip) - Requires Xray 3.92 and above, and Artifactory 7.82 and above
Go (go) - Requires Xray 3.92 and above, and Artifactory 7.87 and above
Audit your Project with JFrog CLI curation-audit command
Prerequisites:
Connect JFrog CLI to JFrog Platform
Connect the JFrog CLI to your JFrog Platform instance by running the following command:
The output should list the Artifactory server you just added, marked as the default.
Configure JFrog CLI for your project: ensure your project is configured in JFrog CLI with the repository from which you would like to resolve dependencies. Here are details for each package manager:
NPM:
MAVEN:
PIP:
GO:
Curation-Audit the project in the current directory. Displays all known packages that were blocked by Curation Policies.
Curation-Audit the projects according to the specific paths defined in the "working-dirs" option. Displays all known packages that were blocked by Curation Policies for all projects. The data is displayed in separate tables.
Curation-Audit the project in the current directory using 5 threads to check the packages Curation status in parallel. Displays all known packages blocked by Curation Policies.
Curation-Audit Waiver Request Process: The developer specifies the required row(s) from the table for the blocked policies. They then add a description and submit the request. A summary table is presented at the end of the process.
When used with JFrog Distribution, JFrog CLI uses the following syntax:
The following sections describe the commands available in the JFrog CLI for use with JFrog Distribution.
This command creates and updates an unsigned Release Bundle on JFrog Distribution.
Note
Create a release bundle with name myApp and version 1.0.0. The release bundle will include the files defined in the File Spec specified by the --spec option.
Create a release bundle with name myApp and version 1.0.0. The release bundle will include the files defined in the File Spec specified by the --spec option. GPG sign the release bundle after it is created.
Update the release bundle with name myApp and version 1.0.0. The release bundle will include the files defined in the File Spec specified by the --spec option.
Update the release bundle with name myApp and version 1.0.0. The release bundle will include all the zip files inside the zip folder, located at the root of the my-local-repo repository.
Update the release bundle with name myApp and version 1.0.0. The release bundle will include all the zip files inside the zip folder, located at the root of the my-local-repo repository. The files will be distributed on the Edge Node to the target-zips folder, under the root of the my-target-repo repository.
This example creates a release bundle and applies "pathMapping" to the artifact paths after distributing the release bundle.
All occurrences of the "a1.in" file are fetched and mapped to the "froggy" repository at the edges.
Fetch all artifacts retrieved by the AQL query.
Create the release bundle with the artifacts and apply the path mappings at the edges after distribution.
The "pathMapping" option is provided, allowing users to control the destination of the release bundle artifacts at the edges.
Note: The "target" option is designed to work for most use cases. The "pathMapping" option is intended for specific use cases, such as including a list.manifest.json file inside the release bundle.
In that scenario, the distribution server dynamically includes all the manifest.json and their layers and assigns the given path mapping, whereas "target" doesn't achieve this.
Spec file content:
This command GPG signs an existing Release Bundle on JFrog Distribution.
Note
GPG sign the release bundle with name myApp and version 1.0.0.
This command distributes a release bundle to the Edge Nodes.
Note
Distribute the release bundle with name myApp and version 1.0.0. Use the distribution rules defined in the specified file.
This command deletes a Release Bundle from the Edge Nodes and optionally from Distribution as well.
Note
Delete the release bundle with name myApp and version 1.0.0 from the Edge Nodes only, according to the definition in the distribution rules file.
Delete the release bundle with name myApp and version 1.0.0 from the Edge Nodes, according to the definition in the distribution rules file. The release bundle will also be deleted from the Distribution service itself.
The offline-update command downloads updates to Xray's vulnerabilities database. The Xray UI allows building the command structure for you.
This command is used to upload files to Artifactory.
jf rt u [command options] <Source path> <Target path>
jf rt u --spec=<File Spec path> [command options]
Upload a file called froggy.tgz to the root of the my-local-repo repository.
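The example above can be sketched as:

```shell
# Upload froggy.tgz to the root of my-local-repo.
jf rt u froggy.tgz my-local-repo/
```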
Collect all the zip files located under the build directory (including subdirectories), and upload them to the my-local-repo repository, under the zipFiles folder, while maintaining the original names of the files.
Collect all the zip files located under the build directory (including subdirectories), and upload them to the my-local-repo repository, under the zipFiles folder, while maintaining the original names of the files. Also delete all files in the my-local-repo repository, under the zipFiles folder, except for the files which were uploaded by this command.
Collect all files located under the build directory (including subdirectories), and upload them to the my-release-local repository, under the files folder, while maintaining the original names of the artifacts. Exclude (do not upload) files, which include install as part of their path, and have the pack extension. This example uses a wildcard pattern. See Example 5, which uses regular expressions instead.
Collect all files located under the build directory (including subdirectories), and upload them to the my-release-local repository, under the files folder, while maintaining the original names of the artifacts. Exclude (do not upload) files, which include install as part of their path, and have the pack extension. This example uses a regular expression. See Example 4, which uses a wildcard pattern instead.
Collect all files located under the build directory and match the /*.zip ANT pattern, and upload them to the my-release-local repository, under the files folder, while maintaining the original names of the artifacts.
Package all files located under the build directory (including subdirectories) into a zip archive named archive.zip, and upload the archive to the my-local-repo repository.
This command is used to download files from Artifactory.
Download from Remote Repositories: By default, the command downloads only the files that are cached on the current Artifactory instance. It does not retrieve files from remote Artifactory instances accessed via remote or virtual repositories. To enable the command to download files from remote Artifactory instances (proxied through remote repositories), set the JFROG_CLI_TRANSITIVE_DOWNLOAD environment variable to true. This feature is available in Artifactory version 7.17 or later. Note that remote downloads are supported only for remote repositories that proxy other Artifactory instances. Downloads from remote repositories that proxy non-Artifactory repositories are not supported. IMPORTANT: Enabling the JFROG_CLI_TRANSITIVE_DOWNLOAD environment variable may increase the load on the remote Artifactory instance. It is advisable to use this setting cautiously.
jf rt dl [command options] <Source path> [Target path]
jf rt dl --spec=<File Spec path> [command options]
Download an artifact called cool-froggy.zip located at the root of the my-local-repo repository to the current directory.
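The example above can be sketched as:

```shell
# Download cool-froggy.zip from the root of my-local-repo to the current directory.
jf rt dl my-local-repo/cool-froggy.zip
```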
Download all artifacts located under the all-my-frogs directory in the my-local-repo repository to the all-my-frogs folder under the current directory.
Download all artifacts located in the my-local-repo repository with a jar extension to the all-my-frogs folder under the current directory.
Download the latest file uploaded to the all-my-frogs folder in the my-local-repo repository.
This command is used to copy files in Artifactory.
jf rt cp [command options] <Source path> <Target path>
jf rt cp --spec=<File Spec path> [command options]
Copy all artifacts located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository.
Copy all zip files located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository.
Copy all artifacts located under /rabbit in the source-frog-repo repository and with property "Version=1.0" into the same path in the target-frog-repo repository.
Copy all artifacts located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository without maintaining the original subdirectory hierarchy.
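A hedged sketch of the first two copy examples above:

```shell
# Copy everything under /rabbit, preserving the path.
jf rt cp "source-frog-repo/rabbit/*" target-frog-repo/rabbit/

# Copy only the zip files under /rabbit.
jf rt cp "source-frog-repo/rabbit/*.zip" target-frog-repo/rabbit/
```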
This command is used to move files in Artifactory.
jf rt mv [command options] <Source path> <Target path>
jf rt mv --spec=<File Spec path> [command options]
Move all artifacts located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository.
Move all zip files located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository.
Move all artifacts located under /rabbit in the source-frog-repo repository and with property "Version=1.0" into the same path in the target-frog-repo repository.
Move all artifacts located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository without maintaining the original subdirectory hierarchy.
This command is used to delete files in Artifactory.
jf rt del [command options] <Delete path>
jf rt del --spec=<File Spec path> [command options]
Delete all artifacts located under /rabbit in the frog-repo repository.
Delete all zip files located under /rabbit in the frog-repo repository.
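A hedged sketch of the two delete examples above:

```shell
# Delete everything under /rabbit in frog-repo, skipping the confirmation prompt.
jf rt del "frog-repo/rabbit/*" --quiet

# Delete only the zip files under /rabbit.
jf rt del "frog-repo/rabbit/*.zip" --quiet
```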
This command is used to search and display files in Artifactory.
jf rt s [command options] <Search path>
jf rt s --spec=<File Spec path> [command options]
Display a list of all artifacts located under /rabbit in the frog-repo repository.
Display a list of all zip files located under /rabbit in the frog-repo repository.
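A hedged sketch of the first two search examples above:

```shell
# List all artifacts under /rabbit.
jf rt s "frog-repo/rabbit/*"

# List only the zip files under /rabbit.
jf rt s "frog-repo/rabbit/*.zip"
```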
Display a list of the files under example-repo-local with the following fields: path, actual_md5, modified_by, updated and depth.
This command is used for setting properties on existing files in Artifactory.
jf rt sp [command options] <Files pattern> <Files properties>
jf rt sp <artifact properties> --spec=<File Spec path> [command options]
Set properties on all the zip files in the generic-local repository. The command sets the property "a" to the value "1" and the property "b" to the two values "2" and "3".
The command sets the property "a" to the value "1" and the property "b" to the two values "2" and "3" on all files found by the File Spec my-spec.
Set properties on all the jar files in the maven-local repository. The command sets the property "version" to the value "1.0.0" and the property "release" to the value "stable".
The command sets the property "environment" to the value "production" and the property "team" to the value "devops" on all files found by the File Spec prod-spec.
Set properties on all the tar.gz files in the devops-local repository. The command sets the property "build" to the value "102" and the property "branch" to the value "main".
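A sketch of the set-properties examples above; the property string uses the standard `key=value;key=value` form, with commas separating multiple values for one key:

```shell
# Set a=1 and b=2,3 on all zip files in generic-local
jf rt sp "generic-local/*.zip" "a=1;b=2,3"

# Same properties, selecting the files via a File Spec (my-spec.json is a placeholder name)
jf rt sp "a=1;b=2,3" --spec my-spec.json

# Set version and release on all jar files in maven-local
jf rt sp "maven-local/*.jar" "version=1.0.0;release=stable"
```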
This command is used for deleting properties from existing files in Artifactory.
jf rt delp [command options] <Files pattern> <Properties list>
jf rt delp <artifact properties> --spec=<File Spec path> [command options]
Remove the properties version and release from all the jar files in the maven-local repository.
Delete the properties build and branch from all tar.gz files in the devops-local repo.
Remove the properties status, phase and stage from all deb files that start with DEV in the debian-repository.
Delete the environment property from /tests/local/block.rpm in the centos-repo.
Remove the properties component, layer and level from files in the docker-hub repository.
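The delete-properties examples above might be expressed as follows, with the properties to remove given as a comma-separated list:

```shell
# Remove version and release from all jar files in maven-local
jf rt delp "maven-local/*.jar" "version,release"

# Remove the environment property from a single file
jf rt delp "centos-repo/tests/local/block.rpm" "environment"
```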
The jf audit command allows scanning your source code dependencies to find security vulnerabilities and license violations, with the ability to scan against your Xray policies. The command builds a deep dependencies graph for your project, scans it with Xray, and displays the results. It uses the package manager used by the project to build the dependencies graph. Currently, the following package managers are supported.
Maven (mvn) - Version 3.1.0 or above of Maven is supported.
Gradle (gradle)
Npm (npm)
Pnpm (pnpm)
Yarn (yarn)
Pip (pip)
Pipenv (pipenv)
Poetry (poetry)
Go Modules (go)
NuGet (nuget)
.NET Core CLI (dotnet)
CocoaPods (pod)
SwiftPM (swift)
Conan (C++)
The command will detect the package manager used by the project automatically. It requires version 3.29.0 or above of Xray and also version 2.13.0 or above of JFrog CLI.
Vulnerability Contextual Analysis: This feature uses the code context to eliminate false positive reports on vulnerable dependencies that are not applicable to the code. Vulnerability Contextual Analysis is currently supported for Python, Go and JavaScript code.
Secrets Detection: Detect any secrets left exposed inside the code, to stop any accidental leak of internal tokens or credentials.
Infrastructure as Code scans (IaC): Scan Infrastructure as Code (Terraform) files for early detection of cloud and infrastructure misconfigurations.
Note
The jf audit command does not extract the internal content of the scanned dependencies. This means that if a package includes other vulnerable components bundled inside the binary, they may not be shown as part of the results. This is contrary to the jf scan command, which drills down into the package content.
To generate the dependency tree for scanning purposes, the system will execute an install command on the project if it hasn't been executed previously.
Audit the project at the current directory. Show all known vulnerabilities, regardless of the policies defined in Xray.
Audit the project at the current directory. Show all known vulnerabilities, regardless of the policies defined in Xray. Show only maven and npm vulnerabilities.
Audit the project at the current directory using a watch named watch1 defined in Xray.
Audit the project at the current directory using the watches watch1 and watch2 defined in Xray.
Audit the project at the current directory using the policies defined for project-1.
Audit the project at the current directory using the policies defined for the libs-local/release-artifacts/ path in Artifactory.
Audit the project in the current directory, excluding all files inside the node_modules directory and files with the to_exclude suffix.
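Several of the audit scenarios above might be invoked as follows (`--watches`, `--project` and the per-technology flags such as `--mvn` and `--npm` are standard `jf audit` options):

```shell
# Show all known vulnerabilities for the project in the current directory
jf audit

# Limit the scan to Maven and npm dependencies
jf audit --mvn --npm

# Scan against watches defined in Xray
jf audit --watches "watch1,watch2"

# Scan using the policies defined for a JFrog project
jf audit --project project-1
```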
The SBOM enrichment command takes an exported SBOM file (CycloneDX format only) in XML or JSON format and enriches it with package vulnerabilities found by Xray.
The jf sbom enrich <file_path> command enriches the file located at file_path.
Note
This command requires:
Version 3.101.3 or above of Xray
Version 2.60.0 or above of JFrog CLI
Enriches an XML file
Enriches a JSON file
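Following the command form given above, the two enrichment examples might look like this (the file names are placeholders):

```shell
# Enrich a CycloneDX SBOM exported as XML
jf sbom enrich ./my-sbom.xml

# Enrich a CycloneDX SBOM exported as JSON
jf sbom enrich ./my-sbom.json
```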
Build-info is collected by adding the --build-name and --build-number options to different CLI commands. The CLI commands can be run several times and cumulatively collect build-info for the specified build name and number until it is published to Artifactory. For example, running the jf rt download command several times with the same build name and number will accumulate each downloaded file in the corresponding build-info.
Dependencies are collected by adding the --build-name and --build-number options to the jf rt download command.
For example, the following command downloads the cool-froggy.zip file found in repository my-local-repo, but it also specifies this file as a dependency in build my-build-name with build number 18:
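A sketch of that download, using the names given in the text:

```shell
# Download the file and record it as a dependency of my-build-name/18
jf rt dl my-local-repo/cool-froggy.zip --build-name=my-build-name --build-number=18
```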
Build artifacts are collected by adding the --build-name and --build-number options to the jf rt upload command.
For example, the following command specifies that file froggy.tgz uploaded to repository my-local-repo is a build artifact of build my-build-name with build number 18:
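A sketch of that upload, using the names given in the text:

```shell
# Upload the file and record it as an artifact of my-build-name/18
jf rt u froggy.tgz my-local-repo/ --build-name=my-build-name --build-number=18
```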
This command is used to collect environment variables and attach them to a build.
Environment variables are collected using the build-collect-env (bce) command.
jf rt bce <build name> <build number>
The following table lists the command arguments and flags:
Example 1
The following command collects all currently known environment variables, and attaches them to the build-info for build my-build-name with build number 18:
Example 2
Collect environment variables for the build named frogger-build with build number 17:
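The two bce examples follow directly from the usage line above:

```shell
# Example 1: attach environment variables to my-build-name/18
jf rt bce my-build-name 18

# Example 2: attach environment variables to frogger-build/17
jf rt bce frogger-build 17
```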
The build-add-git (bag) command collects the Git revision and URL from the local .git directory and adds it to the build-info. It can also collect the list of tracked project issues (for example, issues stored in JIRA or other bug tracking systems) and add them to the build-info. The issues are collected by reading the git commit messages from the local git log. Each commit message is matched against a pre-configured regular expression, which retrieves the issue ID and issue summary. The information required for collecting the issues is retrieved from a yaml configuration file provided to the command.
jf rt bag [command options] <build name> <build number> [Path To .git]
The following table lists the command arguments and flags:
This is the configuration file structure.
The download command, as well as other commands which download dependencies from Artifactory, accepts the --build-name and --build-number command options. Adding these options records the downloaded files as build dependencies. In some cases, however, it is necessary to add a file that has been downloaded by another tool to a build. Use the build-add-dependencies command to do this.
By default, the command collects the files from the local file system. If you'd like the files to be collected from Artifactory however, add the --from-rt option to the command.
jf rt bad [command options] <build name> <build number> <pattern>
jf rt bad --spec=<File Spec path> [command options] <build name> <build number>
Example 1
Add all files located under the path/to/build/dependencies/dir directory as dependencies of a build. The build name is my-build-name and the build number is 7. The build-info is only updated locally. To publish the build-info to Artifactory use the jf rt build-publish command.
Example 2
Add all files located in the m-local-repo Artifactory repository, under the dependencies folder, as dependencies of a build. The build name is my-build-name and the build number is 7. The build-info is only updated locally. To publish the build-info to Artifactory use the jf rt build-publish command.
Example 3
Add all files located under the path/to/build/dependencies/dir directory as dependencies of a build. The build name is my-build-name, the build number is 7 and module is m1. The build-info is only updated locally. To publish the build-info to Artifactory use the jf rt build-publish command.
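The three build-add-dependencies examples might be run as follows (`--from-rt` and `--module` are standard `jf rt bad` options):

```shell
# Example 1: add local files as build dependencies
jf rt bad my-build-name 7 "path/to/build/dependencies/dir/"

# Example 2: collect the dependencies from Artifactory instead of the local file system
jf rt bad my-build-name 7 "m-local-repo/dependencies/" --from-rt

# Example 3: assign the dependencies to module m1
jf rt bad my-build-name 7 "path/to/build/dependencies/dir/" --module m1
```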
This command is used to publish build info to Artifactory. To publish the accumulated build-info for a build to Artifactory, use the build-publish command. For example, the following command publishes all the build-info collected for build my-build-name with build number 18:
jf rt bp [command options] <build name> <build number>
Publishes to Artifactory all the build-info collected for build my-build-name with build number 18
The build-info, which is collected and published to Artifactory by the jf rt build-publish command, can include multiple modules. Each module in the build-info represents a package, which is the result of a single build step, or in other words, a JFrog CLI command execution. For example, the following command adds a module named m1 to a build named my-build with 1 as the build number:
The following command, adds a second module, named m2 to the same build:
You now publish the generated build-info to Artifactory using the following command:
Now that you have your build-info published to Artifactory, you can perform actions on the entire build. For example, you can download, copy, move or delete all or some of the artifacts of a build. Here's how you do this.
In some cases though, your build is composed of multiple build steps, which are running on multiple different machines or spread across different time periods. How do you aggregate those build steps, or in other words, aggregate those command executions, into one build-info?
The way to do this, is to create a separate build-info for every section of the build, and publish it independently to Artifactory. Once all the build-info instances are published, you can create a new build-info, which references all the previously published build-info instances. The new build-info can be viewed as a "master" build-info, which references other build-info instances.
So the next question is - how can this reference between the two build-info instances be created?
The way to do this is by using the build-append command. Running this command on an unpublished build-info adds a reference to a different build-info, which has already been published to Artifactory. This reference is represented by a new module in the new build-info. The ID of this module will have the following format: <referenced build name>/<referenced build number>.
Now, when downloading the artifacts of the "master" build, you'll actually be downloading the artifacts of all of its referenced builds. The examples below demonstrate this.
jf rt ba <build name> <build number> <build name to append> <build number to append>
Requirements
Artifactory version 7.25.4 and above.
This script illustrates the process of creating two build-info instances, publishing both to Artifactory, and subsequently generating a third build-info that consolidates the published instances before publishing it to Artifactory.
jf rt bpr [command options] <build name> <build number> <target repository>
This example involves moving the artifacts associated with the published build-info, identified by the build name 'my-build-name' and build number '18', from their existing Artifactory repository to a new Artifactory repository called 'target-repository'.
Build-info is accumulated by the CLI according to the commands you apply until you publish the build-info to Artifactory. If, for any reason, you wish to "reset" the build-info and clean up (i.e. delete) any information accumulated so far, you can use the build-clean (bc) command.
jf rt bc <build name> <build number>
The following table lists the command arguments and flags:
The following command cleans up any build-info collected for build my-build-name with build number 18:
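Following the usage line above, the cleanup example is simply:

```shell
# Discard all locally accumulated build-info for my-build-name/18
jf rt bc my-build-name 18
```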
jf rt bdi [command options] <build name>
The following table lists the command arguments and flags:
Discard the oldest build numbers of build my-build-name from Artifactory, leaving only the 10 most recent builds.
Discard the oldest build numbers of build my-build-name from Artifactory, leaving only builds published during the last 7 days.
Discard the oldest build numbers of build my-build-name from Artifactory, leaving only builds published during the last 7 days. b20 and b21 will not be discarded.
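The three discard scenarios above might be expressed with the `--max-builds`, `--max-days` and `--exclude-builds` options:

```shell
# Keep only the 10 most recent builds
jf rt bdi my-build-name --max-builds=10

# Keep only builds published during the last 7 days
jf rt bdi my-build-name --max-days=7

# Same as above, but never discard b20 and b21
jf rt bdi my-build-name --max-days=7 --exclude-builds="b20,b21"
```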
The transfer-files command allows transferring (copying) all the files stored in one Artifactory instance to a different Artifactory instance. The command allows transferring the files stored in a single or multiple repositories. The command expects the relevant repository to already exist on the target instance and have the same name and type as the repositories on the source.
Artifacts in remote repositories caches are not transferred.
The files transfer process allows transferring files that were created or modified on the source instance after the process started, and the custom properties of those files are also updated on the target instance. However, files that were deleted on the source instance after the process started are not deleted on the target instance by the process. Similarly, if only the custom properties of a file were modified on the source, but not the file's content, the properties are not modified on the target instance by the process.
The source and target repositories should have the same name and type.
Since the files are pushed from the source to the target instance, the source instance must have a network connection to the target.
Ensure that you can log in to the UI of both the source and target instances with users that have admin permissions and that you have the connection details (including credentials) to both instances.
Ensure that all the repositories on the source Artifactory instance whose files you'd like to transfer also exist on the target instance, and have the same name and type on both instances.
Ensure that JFrog CLI is installed on a machine that has network access to both the source and target instances.
To set up the source instance for files transfer, you must install the data-transfer user plugin in the primary node of the source instance. This section guides you through the installation steps.
Configure the connection details of the source Artifactory instance with your admin credentials by running the following command from the terminal.
Ensure that the JFROG_HOME environment variable is set and holds the value of JFrog installation directory. It usually points to the /opt/jfrog directory. In case the variable isn't set, set its value to point to the correct directory as described in the JFrog Product Directory Structure article.
Run the following command to start pushing the files from all the repositories in source instance to the target instance.
This command may take a few days to push all the files, depending on your system size and your network speed. While the command is running, it displays the transfer progress visually inside the terminal.
If you're running the command in the background, you can use the following command to view the transfer progress.
In case you do not wish to transfer the files from all repositories, or wish to run the transfer in phases, you can use the --include-repos and --exclude-repos command options. Run the following command to see the usage of these options.
You can stop the transfer process by pressing CTRL+C if the process is running in the foreground, or by running the following command, if you're running the process in the background.
The process will continue from the point it stopped when you re-run the command.
A path to an errors summary file will be printed at the end of the run, referring to a generated CSV file. Each line in the summary CSV represents an error for a file that failed to be transferred. On subsequent executions of the jf rt transfer-files command, JFrog CLI will attempt to transfer these files again.
Once the jf rt transfer-files command finishes transferring the files, you can run it again to transfer files which were created or modified during the transfer. You can run the command as many times as needed. Subsequent executions of the command will also attempt to transfer files that failed to be transferred during previous executions of the command.
Note:
To install the data-transfer user plugin on the source machine manually, follow these steps.
Download the following two files from a machine that has internet access. Download data-transfer.jar from https://releases.jfrog.io/artifactory/jfrog-releases/data-transfer/[RELEASE]/lib/data-transfer.jar and dataTransfer.groovy from https://releases.jfrog.io/artifactory/jfrog-releases/data-transfer/[RELEASE]/dataTransfer.groovy
Create a new directory on the primary node machine of the source instance and place the two files you downloaded inside this directory.
Install the data-transfer user plugin by running the following command from the terminal. Replace the [plugin files dir] token with the full path to the directory which includes the plugin files you downloaded.
Install JFrog CLI on your source instance by using one of the JFrog CLI installers. For example:
Note
If the source instance is running as a docker container, and you're not able to install JFrog CLI while inside the container, follow these steps.
Connect to the host machine through the terminal.
Download the JFrog CLI executable into the correct directory by running this command:
curl -fL https://getcli.jfrog.io/v2-jf | sh
Copy the JFrog CLI executable you've just downloaded into the container, by running the following docker command. Make sure to replace [the container name] with the name of the container.
docker cp jf [the container name]:/usr/bin/jf
Connect to the container and run the following command to ensure JFrog CLI is installed:
jf -v
The jf rt transfer-files command pushes the files from the source instance to the target instance as follows:
The files are pushed for each repository, one by one in sequence.
For each repository, the process includes the following three phases:
Phase 1 pushes all the files in the repository to the target.
Phase 2 pushes files which have been created or modified after phase 1 started running (diffs).
Phase 3 attempts to push files which failed to be transferred in earlier phases (Phase 1 or Phase 2) or in previous executions of the command.
If Phase 1 finished running for a specific repository, and you run the jf rt transfer-files command again, only Phase 2 and Phase 3 will be triggered. You can run jf rt transfer-files as many times as needed, until you are ready to move your traffic to the target instance permanently. In any subsequent run of the command, Phase 2 will transfer the newly created and modified files, and Phase 3 will retry transferring files which failed to be transferred in previous phases and also in previous runs of the command.
Using Replication
To help reduce the time it takes for Phase 2 to run, you may configure Event Based Push Replication for some or all of the local repositories on the source instance. With Replication configured, when files are created or updated on the source repository, they are immediately replicated to the corresponding repository on the target instance. The replication can be configured at any time: before, during or after the files transfer process.
You can run the jf rt transfer-files command multiple times. This is needed to allow transferring files which have been created or updated after previous command executions. To achieve this, JFrog CLI stores the current state of the files transfer process in a directory named transfer, located under the JFrog CLI home directory. You can usually find this directory at ~/.jfrog/transfer.
JFrog CLI uses the state stored in this directory to avoid repeating transfer actions performed in previous executions of the command. For example, once Phase 1 is completed for a specific repository, subsequent executions of the command will skip Phase 1 and run Phase 2 and Phase 3 only.
In case you'd like to ignore the stored state and restart the files transfer from scratch, you can add the --ignore-state option to the jf rt transfer-files command.
It is recommended to run the transfer-files command from a machine that has network access to the source Artifactory URL. This allows spreading the transfer load on all the Artifactory cluster nodes. This machine should also have network access to the target Artifactory URL.
Follow these steps to install JFrog CLI on that machine.
Install JFrog CLI by using one of the JFrog CLI installers. For example:
If your source instance is accessible only through an HTTP/HTTPS proxy, set the proxy environment variable as described here.
Configure the connection details of the source Artifactory instance with your admin credentials. Run the following command and follow the instructions.
Configure the connection details of the target Artifactory instance as follows.
The jf rt transfer-files command pushes the binaries from the source instance to the target instance. This transfer can take days, depending on the size of the total data transferred, the network bandwidth between the source and the target instance, and additional factors.
Since the process is expected to run while the source instance is still being used, monitor the instance to ensure that the transfer does not add too much load to it. Also, you might decide to increase the load for a faster transfer rate, while you monitor the transfer. This section describes how to control the file transfer speed.
By default, the jf rt transfer-files command uses 8 working threads to push files from the source instance to the target instance. Reducing this value will cause slower transfer speed and lower load on the source instance, and increasing it will do the opposite. We therefore recommend increasing it gradually. This value can be changed while the jf rt transfer-files command is running. There's no need to stop the process to change the number of working threads. The new value set will be cached by JFrog CLI and also used for subsequent runs from the same machine. To set the value, simply run the following interactive command from a new terminal window on the same machine which runs the jf rt transfer-files command.
Build-info repositories
When transferring files in build-info repositories, JFrog CLI limits the number of working threads to 8. This is done in order to limit the load on the target instance while transferring build-info.
The jf rt transfer-files command pushes the files directly from the source to the target instance over the network. In case the traffic from the source instance needs to be routed through an HTTPS proxy, follow these steps.
When running the jf rt transfer-files command, add the --proxy-key option to the command, with the Proxy Key you configured in the UI as the option value. For example, if the Proxy Key you configured is my-proxy-key, run the command as follows:
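A sketch of that invocation; the server IDs my-source-server and my-target-server are placeholders for the IDs you configured:

```shell
jf rt transfer-files my-source-server my-target-server --proxy-key my-proxy-key
```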
Note
When used with JFrog Release Lifecycle Management, JFrog CLI uses the following syntax:
Published build info:
<build-number> is optional; the latest build will be used if empty.
includeDeps is optional; false by default.
project is optional; the default project will be used if empty.
Existing Release Bundles:
project is optional; the default project will be used if empty.
A pattern of artifacts in Artifactory:
Only pattern is mandatory. recursive is true by default.
AQL query:
Only a single AQL query may be provided.
Create a Release Bundle using file spec variables.
Create a Release Bundle synchronously, in project "project0".
Create a Release Bundle using build name and build number variables.
This command allows promoting a Release Bundle to a target environment.
Promote a Release Bundle named "myApp" version "1.0.0" to environment "PROD". Use signing key pair "myKeyPair".
Promote a Release Bundle synchronously to environment "PROD". The Release Bundle is named "myApp", version "1.0.0", of project "project0". Use signing key pair "myKeyPair".
Promote a Release Bundle while including certain repositories.
Promote a Release Bundle while excluding certain repositories.
Promote a Release Bundle, using promotion type flag.
This command distributes a Release Bundle to an Edge node.
Distribution Rules Structure
The Distribution Rules format also supports wildcards. For example:
Distribute the Release Bundle named myApp with version 1.0.0. Use the distribution rules defined in the specified file.
Distribute the Release Bundle named myApp with version 1.0.0 using the default distribution rules. Map files under the source directory to be placed under the target directory.
Synchronously distribute a Release Bundle associated with project "proj"
This command allows deleting all Release Bundle promotions to a specified environment or deleting a Release Bundle locally altogether. Deleting locally means distributions of the Release Bundle will not be deleted.
Locally delete the Release Bundle named myApp with version 1.0.0.
Locally delete the Release Bundle named myApp with version 1.0.0. Run the command synchronously and skip the confirmation message.
Delete all promotions of the specified Release Bundle version to environment "PROD".
This command will delete distributions of a Release Bundle from a distribution target, such as an Edge node.
Delete the distributions of version 1.0.0 of the Release Bundle named myApp from Edge nodes matching the provided distribution rules defined in the specified file.
Delete the distributions of the Release Bundle associated with project "proj" from the provided Edge nodes. Run the command synchronously and skip the confirmation message.
Release Lifecycle Management supports distributing your Release Bundles to remote Edge nodes within an air-gapped environment. This use case is mainly intended for organizations that have two or more JFrog instances that have no network connection between them.
The following command allows exporting a Release Bundle as an archive to the filesystem that can be transferred to a different instance in an air-gapped environment.
Export version 1.0.0 of the Release Bundle named "myApp":
Download the file to a specific location:
You can import a Release Bundle archive from the exported zip file.
Please note this functionality only works on Edge nodes within an air-gapped environment.
Import version 1.0.0 of a Release Bundle named "myExportedApp":
Use the following command to download the contents of a Release Bundle v2 version:
JFrog CLI includes integration with Maven, allowing you to resolve dependencies and deploy build artifacts from and to Artifactory, while collecting build-info and storing it in Artifactory.
Before using the jf mvn command, the project needs to be pre-configured with the Artifactory server and repositories, to be used for building and publishing the project. The jf mvn-config command should be used once to add the configuration to the project. The command should run while inside the root directory of the project. The configuration is stored by the command in the .jfrog directory at the root directory of the project.
The mvn command triggers the maven client, while resolving dependencies and deploying artifacts from and to Artifactory.
Note: Before running the mvn command on a project for the first time, the project should be configured with the jf mvn-config command.
The following table lists the command arguments and flags:
Deployment to Artifactory is triggered by both the deploy and install phases. To disable artifact deployment, add -Dartifactory.publish.artifacts=false to the list of goals and options. For example: "jf mvn clean install -Dartifactory.publish.artifacts=false"
Run clean and install with maven.
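The Maven example, with and without artifact deployment:

```shell
# Run clean and install with Maven, resolving and deploying through Artifactory
jf mvn clean install

# Same build, with artifact deployment disabled
jf mvn clean install -Dartifactory.publish.artifacts=false
```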
JFrog CLI includes integration with Gradle, allowing you to resolve dependencies and deploy build artifacts from and to Artifactory, while collecting build-info and storing it in Artifactory.
Before using the gradle command, the project needs to be pre-configured with the Artifactory server and repositories, to be used for building and publishing the project. The gradle-config command should be used once to add the configuration to the project. The command should run while inside the root directory of the project. The configuration is stored by the command in the .jfrog directory at the root directory of the project.
The jf gradle command triggers the gradle client, while resolving dependencies and deploying artifacts from and to Artifactory.
Note: Before running the jf gradle command on a project for the first time, the project should be configured with the jf gradle-config command.
The following table lists the command arguments and flags:
Build the project using the artifactoryPublish task, while resolving and deploying artifacts from and to Artifactory.
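A sketch of the Gradle example described above:

```shell
# Build the project and deploy artifacts to Artifactory via the artifactoryPublish task
jf gradle clean artifactoryPublish
```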
For integrating with Maven and Gradle, JFrog CLI uses the build-info-extractor jar files. These jar files are downloaded by JFrog CLI from jcenter the first time they are needed.
If you're using JFrog CLI on a machine which has no access to the internet, you can configure JFrog CLI to download these jar files from an Artifactory instance. Here's how to configure Artifactory and JFrog CLI to download the jar files.
Set the JFROG_CLI_EXTRACTORS_REMOTE environment variable with the server ID of the Artifactory server you configured, followed by a slash, and then the name of the repository you created. For example my-rt-server/extractors
JFrog CLI includes integration with MSBuild and Artifactory, allowing you to resolve dependencies and deploy build artifacts from and to Artifactory, while collecting build-info and storing it in Artifactory. This is done by having JFrog CLI in your search path and adding JFrog CLI commands to the MSBuild csproj file.
JFrog CLI provides full support for pulling and publishing docker images from and to Artifactory using the docker client running on the same machine. This allows you to collect build-info for your docker build and then publish it to Artifactory. You can also promote the pushed docker images from one repository to another in Artifactory.
To build and push your docker images to Artifactory, follow these steps:
Make sure that the installed docker client has version 17.07.0-ce (2017-08-29) or above. To verify this, run docker -v.
To ensure that the docker client and your Artifactory docker registry are correctly configured to work together, run the following code snippet.
If everything is configured correctly, any image you push, including the hello-world image, should be uploaded to Artifactory successfully.
Note: When running the docker-pull and docker-push commands, the CLI will first attempt to log in to the docker registry. In case of a login failure, the command will not be executed.
The following table lists the command arguments and flags:
The subsequent command utilizes the docker client to pull the 'my-docker-registry.io/my-docker-image:latest' image from Artifactory. This operation logs the image layers as dependencies of the local build-info identified by the build name 'my-build-name' and build number '7'. This local build-info can subsequently be released to Artifactory using the command 'jf rt bp my-build-name 7'.
After building your image using the docker client, the jf docker push command pushes the image layers to Artifactory, while collecting the build-info and storing it locally, so that it can be later published to Artifactory, using the jf rt build-publish command.
The following table lists the command arguments and flags:
The subsequent command utilizes the docker client to push the 'my-docker-registry.io/my-docker-image:latest' image to Artifactory. This operation logs the image layers as artifacts of the local build-info identified by the build name 'my-build-name' and build number '7'. This local build-info can subsequently be released to Artifactory using the command 'jf rt bp my-build-name 7'.
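A sketch of the push flow described above, with the same placeholder registry, image, and build coordinates:

```shell
# Push the image to Artifactory and record its layers as build-info artifacts
jf docker push my-docker-registry.io/my-docker-image:latest --build-name=my-build-name --build-number=7

# Publish the locally collected build-info to Artifactory
jf rt bp my-build-name 7
```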
The following table lists the command arguments and flags:
In this example, podman is employed to pull the local image 'my-docker-registry.io/my-docker-image:latest' from the docker-local Artifactory repository. During this process, it registers the image layers as dependencies within a build-info identified by the build name 'my-build-name' and build number '7'. This build-info is initially established locally and must be subsequently published to Artifactory using the command 'jf rt build-publish my-build-name 7'.
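The podman pull example above might look like this (registry, image, repository, and build coordinates are placeholders):

```shell
# Pull the image with podman and record its layers as build-info dependencies
jf rt podman-pull my-docker-registry.io/my-docker-image:latest docker-local --build-name=my-build-name --build-number=7

# Publish the locally collected build-info to Artifactory
jf rt build-publish my-build-name 7
```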
The following table lists the command arguments and flags:
In this illustration, podman is employed to push the local image 'my-docker-registry.io/my-docker-image:latest' to the docker-local Artifactory repository. During this process, it registers the image layers as artifacts within a build-info identified by the build name 'my-build-name' and build number '7'. This build-info is initially established locally and must be subsequently published to Artifactory using the command 'jf rt build-publish my-build-name 7'.
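The podman push example above might look like this (registry, image, repository, and build coordinates are placeholders):

```shell
# Push the image with podman and record its layers as build-info artifacts
jf rt podman-push my-docker-registry.io/my-docker-image:latest docker-local --build-name=my-build-name --build-number=7

# Publish the locally collected build-info to Artifactory
jf rt build-publish my-build-name 7
```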
The build-docker-create command allows adding a docker image, which is already published to Artifactory, into the build-info. This build-info can be later published to Artifactory, using the build-publish command.
In this example, a Docker image that has already been deployed to Artifactory is incorporated into a locally created, unpublished build-info identified by the build name myBuild and build number '1'. This local build-info can subsequently be published to Artifactory using the command 'jf rt bp myBuild 1'.
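A sketch of this flow, assuming the full image tag was written to a file named image-file by your image build tooling and the image resides in the docker-local repository (both names are placeholders):

```shell
# image-file contains the full image tag, e.g. my-docker-registry.io/hello-world:latest
jf rt build-docker-create docker-local --image-file ./image-file --build-name myBuild --build-number 1

# Publish the build-info to Artifactory
jf rt bp myBuild 1
```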
Promotion is the action of moving or copying a group of artifacts from one repository to another, to support the artifacts' lifecycle. When it comes to docker images, there are two ways to promote a docker image which was pushed to Artifactory:
Create build-info for the docker image, and then promote the build using the jf rt build-promote command.
Use the jf rt docker-promote command as described below.
The following table lists the command arguments and flags:
Promote the hello-world docker image from the docker-dev-local repository to the docker-staging-local repository.
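The promotion described above could be sketched as a single command (repository names are taken from the example):

```shell
# Promote the hello-world image from docker-dev-local to docker-staging-local
jf rt docker-promote hello-world docker-dev-local docker-staging-local
```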
JFrog CLI provides full support for building npm packages using the npm client. This allows you to resolve npm dependencies, and publish your npm packages from and to Artifactory, while collecting build-info and storing it in Artifactory.
Follow these guidelines when building npm packages:
When the npm-publish command runs, JFrog CLI runs the pack command in the background. The pack action is followed by an upload, which is not based on the npm client's publish command. Therefore, if your npm package includes the prepublish or postpublish scripts, rename them to prepack and postpack respectively.
Requirements
npm client version 5.4.0 and above.
Artifactory version 5.5.2 and above.
Before using the jf npm install, jf npm ci and jf npm publish commands, the project needs to be pre-configured with the Artifactory server and repositories, to be used for building and publishing the project. The jf npm-config command should be used once to add the configuration to the project. The command should run while inside the root directory of the project. The configuration is stored by the command in the .jfrog directory at the root directory of the project.
The jf npm install and jf npm ci commands execute npm's install and ci commands respectively, to fetch the npm dependencies from the npm repositories.
Before running the jf npm install or jf npm ci command on a project for the first time, the project should be configured using the jf npm-config command.
The following table lists the command arguments and flags:
Example 1
Example 2
The following example installs the dependencies. The dependencies are resolved from the Artifactory server and repository configured by the npm-config command.
Example 3
The following example installs the dependencies using the npm-ci command. The dependencies are resolved from the Artifactory server and repository configured by the npm-config command.
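The install examples above could be sketched as follows, assuming the project was already configured with jf npm-config:

```shell
# Install dependencies, resolved via the configured Artifactory repository
jf npm install

# Clean install using npm ci, resolved via the same configuration
jf npm ci
```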
The npm-publish command packs and deploys the npm package to the designated npm repository.
Before running the npm-publish command on a project for the first time, the project should be configured using the jf npm-config command. This configuration includes the Artifactory server and repository to which the package should deploy.
Warning: If your npm package includes the prepublish or postpublish scripts, please refer to the guidelines above.
The following table lists the command arguments and flags:
JFrog CLI provides full support for building npm packages using the yarn client. This allows you to resolve npm dependencies, while collecting build-info and storing it in Artifactory. You can download npm packages from any npm repository type - local, remote or virtual. Publishing the packages to a local npm repository is supported through the jf rt upload command.
Yarn version 2.4.0 and above is supported.
Before using the jf yarn command, the project needs to be pre-configured with the Artifactory server and repositories, to be used for building the project. The yarn-config command should be used once to add the configuration to the project. The command should run while inside the root directory of the project. The configuration is stored by the command in the .jfrog directory at the root directory of the project.
The jf yarn command executes the yarn client, to fetch the npm dependencies from the npm repositories.
Note: Before running the command on a project for the first time, the project should be configured using the jf yarn-config command.
The following table lists the command arguments and flags:
Example 1
Example 2
The following example installs the dependencies. The dependencies are resolved from the Artifactory server and repository configured by the jf yarn-config command.
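The install example above could be sketched as follows, assuming the project was already configured with jf yarn-config:

```shell
# Install dependencies with yarn, resolved via the configured Artifactory repository
jf yarn install
```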
JFrog CLI provides full support for building Go packages using the Go client. This allows you to resolve Go dependencies from, and publish your Go packages to, Artifactory, while collecting build-info and storing it in Artifactory.
JFrog CLI client version 1.20.0 and above.
Artifactory version 6.1.0 and above.
Go client version 1.11.0 and above.
Before you can use JFrog CLI to build your Go projects with Artifactory, you first need to set the resolution and deployment repositories for the project.
Here's how you set the repositories.
'cd' into the root of the Go project.
Run the jf go-config command.
Example 1
Set repositories for this go project.
Example 2
Set repositories for all go projects on this machine.
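The two configuration examples above might look like this. The repository names go-virtual and go-local and the server ID my-server are placeholders for your own configuration:

```shell
# Example 1: set repositories for this project (run from the project root)
jf go-config --repo-resolve go-virtual --repo-deploy go-local --server-id-resolve my-server --server-id-deploy my-server

# Example 2: apply the configuration to all Go projects on this machine
jf go-config --global --repo-resolve go-virtual --repo-deploy go-local
```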
The go command triggers the go client.
Note: Before running the go command on a project for the first time, the project should be configured using the jf go-config command.
The following table lists the command arguments and flags:
Example 1
The following example runs the go build command. The dependencies are resolved from Artifactory via the go-virtual repository.
Note: Before using this example, please make sure to set repositories for the Go project using the go-config command.
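The build example above could be sketched as a single command, run from the project root after jf go-config:

```shell
# Build, resolving module dependencies through the configured go-virtual repository
jf go build
```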
Example 2
Note: Before using this example, please make sure to set repositories for the Go project using the go-config command.
The jf go-publish command packs and deploys the Go package to the designated Go repository in Artifactory.
Note: Before running the jf go-publish command on a project for the first time, the project should be configured using the jf go-config command.
The following table lists the command arguments and flags:
Example 1
To pack and publish the Go package, run the following command. Before running this command on a project for the first time, the project should be configured using the jf go-config command.
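A sketch of the publish step, assuming v1.0.0 is the version you wish to release:

```shell
# Pack and publish version v1.0.0 of the module to the configured deployment repository
jf go-publish v1.0.0
```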
Example 2
JFrog CLI provides full support for building Python packages using the pip and pipenv package managers, and deploying distributions using twine. This allows you to resolve python dependencies from Artifactory using pip and pipenv, while recording the downloaded packages. After installing and packaging the project, the distributions and wheels can be deployed to Artifactory using twine, while recording the uploaded packages. The downloaded packages are stored as dependencies in the build-info stored in Artifactory, while the uploaded ones are stored as artifacts.
Before you can use JFrog CLI to build your Python projects with Artifactory, you first need to set the repository for the project.
Here's how you set the repositories.
'cd' into the root of the Python project.
Run the jf pip-config or jf pipenv-config commands, depending on whether you're using the pip or pipenv clients.
Commands Params
Examples
Example 1
Set repositories for this Python project when using the pip client (for pipenv: jf pipec).
Example 2
Set repositories for all Python projects using the pip client on this machine (for pipenv: jf pipec --global).
The jf pip install and jf pipenv install commands use the pip and pipenv clients respectively, to install the project dependencies from Artifactory. The jf pip install and jf pipenv install commands can also record these packages as build dependencies as part of the build-info published to Artifactory.
Note: Before running the pip install and pipenv install commands on a project for the first time, the project should be configured using the jf pip-config or jf pipenv-config commands respectively.
Recording all dependencies
JFrog CLI records the installed packages as build-info dependencies. The recorded dependencies are packages installed during the jf pip install and jf pipenv install command execution. When running the command inside a Python environment, which already has some of the packages installed, the installed packages will not be included as part of the build-info, because they were not originally installed by JFrog CLI. A warning message will be added to the log in this case.
How to include all packages in the build-info?
The details of all the installed packages are always cached by the jf pip install and jf pipenv install command in the .jfrog/projects/deps.cache.json file, located under the root of the project. JFrog CLI uses this cache for including previously installed packages in the build-info.
If the Python environment had some packages installed prior to the first execution of the install command, those previously installed packages will be missing from the cache and therefore will not be included in the build-info.
Commands Params
Examples
Example 1
The following command triggers pip install, while recording the build dependencies as part of build name my-build and build number 1.
Example 2
The following command triggers pipenv install, while recording the build dependencies as part of build name my-build and build number 1.
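The two install examples above could be sketched as follows:

```shell
# pip: install and record dependencies as part of build my-build, number 1
jf pip install --build-name my-build --build-number 1

# pipenv: same, using the pipenv client
jf pipenv install --build-name my-build --build-number 1
```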
The jf twine upload command uses twine to publish the project distributions to Artifactory. The jf twine upload command can also record these packages as build artifacts as part of the build-info published to Artifactory.
Note: Before running the twine upload command on a project for the first time, the project should be configured using the jf pip-config or jf pipenv-config commands, with deployer configuration.
Commands Params
Examples
Example 1
The following command triggers twine upload, while recording the build artifacts as part of build name my-build and build number 1.
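The upload example above could be sketched as follows, assuming the distributions were built into the conventional dist/ directory:

```shell
# Upload the distributions under dist/ and record them as artifacts of build my-build, number 1
jf twine upload "dist/*" --build-name my-build --build-number 1
```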
JFrog CLI provides partial support for building Python packages using the poetry package manager. This allows resolving python dependencies from Artifactory, but currently does NOT record downloaded packages as dependencies in the build-info.
Before you can use JFrog CLI to build your Python projects with Artifactory, you first need to set the repository for the project.
Here's how you set the repositories.
'cd' into the root of the Python project.
Run the jf poetry-config command as follows.
Commands Params
Examples
Example 1
Set repositories for this Python project when using the poetry client.
Example 2
Set repositories for all Python projects using the poetry client on this machine.
The jf poetry install command uses the poetry client to install the project dependencies from Artifactory.
Note: Before running the poetry install command on a project for the first time, the project should be configured using the jf poetry-config command.
Commands Params
Examples
Example 1
The following command triggers poetry install, while resolving dependencies from Artifactory.
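The example above could be sketched as a single command, run after configuring the project with jf poetry-config:

```shell
# Install dependencies with poetry, resolving from the configured Artifactory repository
jf poetry install
```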
JFrog CLI provides full support for restoring NuGet packages using the NuGet client or the .NET Core CLI. This allows you to resolve NuGet dependencies from and publish your NuGet packages to Artifactory, while collecting build-info and storing it in Artifactory.
NuGet dependencies resolution is supported by the jf nuget command, which uses the NuGet client, or the jf dotnet command, which uses the .NET Core CLI.
Before using the nuget or dotnet commands, the project needs to be pre-configured with the Artifactory server and repository, to be used for building the project.
Before using the nuget or dotnet commands, the nuget-config or dotnet-config commands should be used respectively. These commands configure the project with the details of the Artifactory server and repository, to be used for the build. The nuget-config or dotnet-config commands should be executed while inside the root directory of the project. The configuration is stored by the command in the .jfrog directory at the root directory of the project. You then have the option of storing the .jfrog directory with the project sources, or creating this configuration after the sources are checked out.
The following table lists the commands' options:
The nuget command runs the NuGet client and the dotnet command runs the .NET Core CLI.
Before running the nuget command on a project for the first time, the project should be configured using the nuget-config command.
Before running the dotnet command on a project for the first time, the project should be configured using the dotnet-config command.
The following table lists the commands arguments and options:
Example 1
Run nuget restore for the solution at the current directory, while resolving the NuGet dependencies from the pre-configured Artifactory repository. Use the NuGet client for this command.
Example 2
Run dotnet restore for the solution at the current directory, while resolving the NuGet dependencies from the pre-configured Artifactory repository. Use the .NET Core CLI for this command.
Example 3
Run dotnet restore for the solution at the current directory, while resolving the NuGet dependencies from the pre-configured Artifactory repository.
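The restore examples above could be sketched as follows, run from the solution directory after jf nuget-config or jf dotnet-config:

```shell
# Restore using the NuGet client, resolving from the configured Artifactory repository
jf nuget restore

# Restore using the .NET Core CLI, resolving from the same configuration
jf dotnet restore
```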
JFrog CLI supports packaging Terraform modules and publishing them to a Terraform repository in Artifactory using the jf terraform publish command.
Before using the jf terraform publish command for the first time, you first need to configure the Terraform repository for your Terraform project. To do this, follow these steps:
'cd' into the root directory for your Terraform project.
Run the interactive jf terraform-config command and set deployment repository name.
The jf terraform-config command will store the repository name inside the .jfrog directory located in the current directory. You can also add the --global command option if you prefer that the repository configuration apply to all projects on the machine. In that case, the configuration will be saved in JFrog CLI's home directory.
The following table lists the command options:
Example 1
Configuring the Terraform repository for a project, while inside the root directory of the project
Example 2
Configuring the Terraform repository for all projects on the machine
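The two configuration examples above might look like this. The repository name terraform-local and the --repo-deploy option used to skip the interactive prompt are assumptions; substitute your own repository:

```shell
# Example 1: configure the deployment repository for this project (run from the project root)
jf terraform-config --repo-deploy terraform-local

# Example 2: apply the configuration to all projects on this machine
jf terraform-config --global --repo-deploy terraform-local
```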
The terraform publish command creates a terraform package for the module in the current directory, and publishes it to the configured Terraform repository in Artifactory.
The following table lists the commands arguments and options:
Example 1
The command creates a package for the Terraform module in the current directory, and publishes it to the Terraform repository (configured by the jf tfc command) with the provided namespace, provider and tag.
Example 2
The command creates a package for the Terraform module in the current directory, and publishes it to the Terraform repository (configured by the jf tfc command) with the provided namespace, provider and tag. The published package will not include module paths that include either 'test' or 'ignore'.
Example 3
The command creates a package for the Terraform module in the current directory, and publishes it to the Terraform repository (configured by the jf tfc command) with the provided namespace, provider and tag. The published module will be recorded as an artifact of a build named my-build with build number 1. The jf rt bp command publishes the build to Artifactory.
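The three publish examples above could be sketched as follows. The namespace, provider, and tag values are placeholders:

```shell
# Example 1: publish with a namespace, provider and tag
jf terraform publish --namespace=example-ns --provider=aws --tag=v0.0.1

# Example 2: additionally exclude module paths containing 'test' or 'ignore'
jf terraform publish --namespace=example-ns --provider=aws --tag=v0.0.1 --exclusions="*test*;*ignore*"

# Example 3: also record the module as an artifact of build my-build, number 1, then publish the build
jf terraform publish --namespace=example-ns --provider=aws --tag=v0.0.1 --build-name=my-build --build-number=1
jf rt bp my-build 1
```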
JFROG_CLI_LOG_LEVEL
[Default: INFO]
This variable determines the log level of the JFrog CLI. Possible values are: DEBUG, INFO, WARN and ERROR. If set to ERROR, JFrog CLI logs error messages only. It is useful when you wish to read or parse the JFrog CLI output and do not want any other information logged.
JFROG_CLI_LOG_TIMESTAMP
[Default: TIME]
Controls the log messages timestamp format. Possible values are: TIME, DATE_AND_TIME, and OFF.
JFROG_CLI_HOME_DIR
[Default: ~/.jfrog]
Defines the JFrog CLI home directory.
JFROG_CLI_TEMP_DIR
[Default: The operating system's temp directory]
Defines the temp directory used by JFrog CLI.
JFROG_CLI_PLUGINS_SERVER
[Default: Official JFrog CLI Plugins registry]
Configured Artifactory server ID from which to download JFrog CLI Plugins.
JFROG_CLI_PLUGINS_REPO
[Default: 'jfrog-cli-plugins']
Can be optionally used with the JFROG_CLI_PLUGINS_SERVER environment variable. Determines the name of the local repository to use.
JFROG_CLI_RELEASES_REPO
Configured Artifactory repository name from which to download the jar needed by the mvn/gradle command. This environment variable's value format should be server ID configured by the 'jf c add' command. The repository should proxy . This environment variable is used by the 'jf mvn' and 'jf gradle' commands, and also by the 'jf audit' command, when used for maven or gradle projects.
JFROG_CLI_SERVER_ID
Server ID configured using the 'jf config' command, unless sent as a command argument or option.
CI
[Default: false]
If true, disables interactive prompts and progress bar.
HTTP_PROXY
Determines a URL to an HTTP proxy.
HTTPS_PROXY
Determines a URL to an HTTPS proxy.
NO_PROXY
Use this variable to bypass the proxy for specific IP addresses, subnets or domains. It may contain a comma-separated(,) list of hostnames or IPs without protocols and ports, in standard Go NO_PROXY syntax (see the Go documentation for syntax details). A typical usage is to set this variable to Artifactory's IP address.
[Default: true] If true, the configured password will be encrypted using Artifactory's before being stored. If false, the configured password will not be encrypted.
Read about additional environment variables at the page.
The enables you to point to a binary in your local file system and receive a report that contains a list of vulnerabilities, licenses, and policy violations for that binary prior to uploading the binary or build to Artifactory.
For more information see
For a full list of the package managers and build systems supported by the curation-audit command and the required Artifactory and Xray versions to use it please see
Some package types (except npm packages) require 'pass-through' curation configuration on the remote repositories in Artifactory, in addition to configuring curation on them. For more information, see .
When prompted for the access token, use the token generated from Artifactory. For more details, refer to the .
Set the resolved repository using the command inside the project directory.
Set the resolved repository using the command inside the project directory.
Set the resolved repository using the command inside the project directory (The only package installer supported for now by Python is "pip").
Set the resolved repository using the command inside the project directory.
This page describes how to use JFrog CLI with .
Read more about JFrog CLI .
These commands require version 2.0 or higher of.
This example uses . It creates the release bundle with name myApp and version 1.0.0. The release bundle will include all the zip files inside the zip folder, located at the root of the my-local-repo repository. The files will be distributed on the Edge Node to the target-zips folder, under the root of the my-target-repo repository. In addition, the distributed files will be renamed on the Edge Node, by adding -target to the name of each file.
To learn more, visit the .
These commands require version 2.0 or higher of.
These commands require version 2.0 or higher of.
These commands require version 2.0 or higher of .
This command also supports the following Advanced Scans with the Advanced Security Package enabled on the JFrog Platform instance. To enable the Advanced Security Package, contact us using form.
JFrog CLI is integrated with JFrog Xray and JFrog Artifactory, allowing you to have your build artifacts and dependencies scanned for vulnerabilities and license violations. Please note that the build in the example below had already been published to Artifactory using the .
JFrog CLI integrates with any development ecosystem allowing you to collect build-info and then publish it to Artifactory. By publishing build-info to Artifactory, JFrog CLI empowers Artifactory to provide visibility into artifacts deployed, dependencies used and extensive information on the build environment to allow fully traceable builds. Read more about build-info and build integration with Artifactory .
Many of JFrog CLI's commands accept two optional command options: --build-name and --build-number. When these options are added, JFrog CLI collects and records the build info locally for these commands. When running multiple commands using the same build name and build number, JFrog CLI aggregates the collected build info into one build. The recorded build-info can be later published to Artifactory using the command.
This command is used to in Artifactory.
This command is used to discard builds previously published to Artifactory using the command.
Install JFrog CLI on the primary node machine of the source instance as described .
If the source instance has internet access, you can install the data-transfer user plugin on the source machine automatically by running the following command from the terminal: jf rt transfer-plugin-install source-server. If, however, the source instance has no internet access, install the plugin manually as described .
Install JFrog CLI on any machine that has access to both the source and the target JFrog instances. To do this, follow the steps described .
If the traffic between the source and target instance needs to be routed through an HTTPS proxy, refer to section.
While the file transfer is running, monitor the load on your source instance, and if needed, reduce the transfer speed or increase it for better performance. For more information, see the section.
Read more about how the transfer files works in section.
Define the proxy details in the source instance UI as described in the .
This page describes how to use JFrog CLI with .
Release Lifecycle Management is available since .
The create command allows creating a Release Bundle v2 using . The file spec may be of one of the following creation sources:
For more information, see .
Note: If the machine running JFrog CLI has no access to the internet, make sure to read the section.
Note: If the machine running JFrog CLI has no access to the internet, make sure to read the section.
Create a remote Maven repository in Artifactory and name it extractors. When creating the repository, configure it to proxy
Make sure that this Artifactory server is known to JFrog CLI, using the command. If not, configure it using the command.
For detailed instructions, please refer to our on GitHub.
Make sure Artifactory can be used as docker registry. Please refer to in the JFrog Artifactory User Guide.
Check out our .
Running jf docker pull command allows pulling docker images from Artifactory, while collecting the build-info and storing it locally, so that it can be later published to Artifactory, using the command.
You can then publish the build-info collected by the jf docker pull command to Artifactory using the command.
You can then publish the build-info collected by the docker-push command to Artifactory using the command.
is a daemon-less container engine for developing, managing, and running OCI Containers. Running the podman-pull command allows pulling docker images from Artifactory using podman, while collecting the build-info and storing it locally, so that it can be later published to Artifactory, using the command.
You can then publish the build-info collected by the podman-pull command to Artifactory using the command.
is a daemon-less container engine for developing, managing, and running OCI Containers. After building your image, the podman-push command pushes the image layers to Artifactory, while collecting the build-info and storing it locally, so that it can be later published to Artifactory, using the build-publish command.
You can then publish the build-info collected by the podman-push command to Artifactory using the command.
JFrog CLI allows pushing containers to Artifactory using , while collecting build-info and storing it in Artifactory. For detailed instructions, please refer to our .
JFrog CLI allows pushing containers to Artifactory using , while collecting build-info and storing it in Artifactory. For detailed instructions, please refer to our .
JFrog CLI allows pushing containers to Artifactory using the , while collecting build-info and storing it in Artifactory. For detailed instructions, please refer to our .
You can then publish the build-info collected by the podman-push command to Artifactory using the command.
You can download npm packages from any npm repository type - local, remote or virtual, but you can only publish to a local or virtual Artifactory repository, containing local repositories. To publish to a virtual repository, you first need to set a default local repository. For more details, please refer to .
The following example installs the dependencies and records them locally as part of build my-build-name/1. The build-info can later be published to Artifactory using the command. The dependencies are resolved from the Artifactory server and repository configured by npm-config command.
To pack and publish the npm package and also record it locally as part of build my-build-name/1, run the following command. The build-info can later be published to Artifactory using the command. The package is published to the Artifactory server and repository configured by npm-config command.
The following example installs the dependencies and records them locally as part of build my-build-name/1. The build-info can later be published to Artifactory using the command. The dependencies are resolved from the Artifactory server and repository configured by the yarn-config command.
To help you get started, you can use .
The following example runs Go build command, while recording the build-info locally under build name my-build and build number 1. The build-info can later be published to Artifactory using the command.
To pack and publish the Go package and also record the build-info as part of build my-build-name/1 , run the following command. The build-info can later be published to Artifactory using the command. Before running this command on a project for the first time, the project should be configured using the jf go-config command.
To help you get started, you can use .
Running the install command with both the no-cache-dir and force-reinstall pip options should re-download and install these packages, and they will therefore be included in the build-info and added to the cache. It is also recommended to run the command from inside a .
To publish your NuGet packages to Artifactory, use the command.
In addition, record the build-info as part of build my-build-name/1. The build-info can later be published to Artifactory using the command.
We recommend using for an easy start up.
SCA
Software Composition Analysis for source code and binary files
Contextual Analysis
Deep Contextual Analysis combining real-world exploitability and CVEs applicability
Secrets
Secrets Detection for source code and binary files
Infrastructure as Code (IaC)
Identify security exposures in your IaC
SAST
Discover vulnerabilities in the 1st party code
Variable Name
Description
JFROG_CLI_MIN_CHECKSUM_DEPLOY_SIZE_KB
[Default: 10] Minimum file size in KB for which JFrog CLI performs checksum deploy optimization.
JFROG_CLI_RELEASES_REPO
Configured Artifactory repository name to download the jar needed by the mvn/gradle command. This environment variable's value format should be server ID configured by the 'jf c add' command. The repository should proxy https://releases.jfrog.io. This environment variable is used by the 'jf mvn' and 'jf gradle' commands, and also by the 'jf audit' command, when used for maven or gradle projects.
JFROG_CLI_DEPENDENCIES_DIR
[Default: $JFROG_CLI_HOME_DIR/dependencies] Defines the directory to which JFrog CLI's internal dependencies are downloaded.
JFROG_CLI_REPORT_USAGE
[Default: true] Set to false to block JFrog CLI from sending usage statistics to Artifactory.
JFROG_CLI_SERVER_ID
Server ID configured using the 'jf config' command, unless sent as a command argument or option.
JFROG_CLI_BUILD_NAME
Build name to be used by commands which expect a build name, unless sent as a command argument or option.
JFROG_CLI_BUILD_NUMBER
Build number to be used by commands which expect a build number, unless sent as a command argument or option.
JFROG_CLI_BUILD_PROJECT
JFrog project key to be used by commands that expect build name and build number. Determines the project of the published build.
JFROG_CLI_BUILD_URL
Sets the CI server build URL in the build-info. The "jf rt build-publish" command uses the value of this environment variable unless the --build-url command option is sent.
JFROG_CLI_ENV_EXCLUDE
[Default: password;secret;key;token] A semicolon-separated(;) list of case-insensitive patterns in the form of "value1;value2;...". Environment variables matching those patterns will be excluded. This environment variable is used by the "jf rt build-publish" command, in case the --env-exclude command option is not sent.
JFROG_CLI_TRANSITIVE_DOWNLOAD
[Default: false] Set this option to true to include remote repositories in artifact searches when using the 'rt download' command. The search will target the first five remote repositories within the virtual repository. This feature is available starting from Artifactory version 7.17.0. NOTE: Enabling this option may increase the load on Artifactory instances that are proxied by multiple remote repositories.
JFROG_CLI_UPLOAD_EMPTY_ARCHIVE
[Default: false] Used by the "jf rt upload" command. Set to true if you'd like to upload an empty archive when '--archive' is set but all files were excluded by exclusions pattern.
Command name
scan
Abbreviation
s
Command options
--server-id
[Optional] Server ID configured using the jf c add command. If not specified, the default configured server is used.
--spec
[Optional] Path to a file specifying the files to scan. If the pattern argument is provided to the command, this option should not be provided.
--project
[Optional] JFrog project key, to enable Xray to determine security violations accordingly. The command accepts this option only if the --repo-path and --watches options are not provided. If none of the three options are provided, the command will show all known vulnerabilities.
--repo-path
[Optional] Artifactory repository path, to enable Xray to determine violations accordingly. The command accepts this option only if the --project and --watches options are not provided. If none of the three options are provided, the command will show all known vulnerabilities.
--watches
[Optional] A comma-separated(,) list of Xray watches, to enable Xray to determine violations accordingly. The command accepts this option only if the --project and --repo-path options are not provided. If none of the three options are provided, the command will show all known vulnerabilities.
--licenses
[Default: false] Set if you also require the list of licenses to be displayed.
--format=json
[Optional] Produces a JSON file containing the scan results.
--vuln
[Optional] Set if you'd like to receive all vulnerabilities, regardless of the policy configured in Xray.
Command arguments
Pattern
Specifies the local file system path to artifacts to be scanned. You can specify multiple files by using wildcards.
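A usage sketch for the scan command (the file pattern and watch names below are illustrative, not real values):

```shell
# Scan local artifacts and include license information in JSON output:
jf scan "build/libs/*.jar" --licenses --format=json

# Scan against the violations defined by specific Xray watches:
jf scan "build/libs/*.jar" --watches "watch-1,watch-2"
```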
Command name
docker scan
Abbreviation
Command options
--server-id
[Optional] Server ID configured using the jf c add command. If not specified, the default configured server is used.
--project
[Optional] JFrog project key, to enable Xray to determine security violations accordingly. The command accepts this option only if the --repo-path and --watches options are not provided. If none of the three options are provided, the command will show all known vulnerabilities.
--repo-path
[Optional] Artifactory repository path, to enable Xray to determine violations accordingly. The command accepts this option only if the --project and --watches options are not provided. If none of the three options are provided, the command will show all known vulnerabilities.
--watches
[Optional] A comma-separated(,) list of Xray watches, to enable Xray to determine violations accordingly. The command accepts this option only if the --project and --repo-path options are not provided. If none of the three options are provided, the command will show all known vulnerabilities.
--licenses
[Default: false] Set if you also require the list of licenses to be displayed.
--validate-secrets
[Default: false] Triggers token validation on found secrets.
--format=json
[Optional] Produces a JSON file containing the scan results.
--vuln
[Optional] Set if you'd like to receive all vulnerabilities, regardless of the policy configured in Xray.
Command arguments
Pattern
Specifies the local file system path to artifacts to be scanned. You can specify multiple files by using wildcards.
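A usage sketch for docker scan (the image name is an illustrative placeholder):

```shell
# Scan a local Docker image:
jf docker scan my-image:latest

# Show all vulnerabilities regardless of configured policies, as JSON:
jf docker scan my-image:latest --vuln --format=json
```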
Command name
curation-audit
Abbreviation
ca
Command options
--format
[Default: table] Defines the output format of the command. Acceptable values are: table and json.
--working-dirs
[Optional] A comma-separated list of relative working directories, to determine the audit target locations.
--threads
[Default: 3] The number of parallel threads used to determine the curation status for each package in the project tree.
--requirements-file
[Optional] [Pip] Defines pip requirements file name. For example: 'requirements.txt'
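A usage sketch for curation-audit (the thread count and requirements file name are illustrative):

```shell
# Audit the current project tree for curation status:
jf curation-audit --format=json --threads=5

# For a pip project with a non-default requirements file:
jf ca --requirements-file=requirements-dev.txt
```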
Command-name
release-bundle-create / release-bundle-update
Abbreviation
rbc / rbu
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--target-props
[Optional] The list of properties, in the form of key1=value1;key2=value2,..., to be added to the artifacts after distribution of the release bundle.
--target
[Optional] The target path for distributed artifacts on the edge node. If not specified, the artifacts will have the same path and name on the edge node as on the source Artifactory server. For flexibility in specifying the distribution path, you can include placeholders in the form of {1}, {2} which are replaced by corresponding tokens in the pattern path that are enclosed in parentheses.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--sign
[Default: false] If set to true, automatically signs the release bundle version.
--passphrase
[Optional] The passphrase for the signing key.
--desc
[Optional] Description of the release bundle.
--release-notes-path
[Optional] Path to a file describing the release notes for the release bundle version.
--release-notes-syntax
[Default: plain_text] The syntax for the release notes. Can be one of markdown, asciidoc, or plain_text.
--exclusions
[Optional] A list of semicolon-separated(;) exclude path patterns, to be excluded from the Release Bundle. Allows using wildcards.
--repo
[Optional] A repository name at source Artifactory to store release bundle artifacts in. If not provided, Artifactory will use the default one.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--detailed-summary
[Default: false] Set to true to return the SHA256 value of the release bundle manifest.
Command arguments:
release bundle name
The name of the release bundle.
release bundle version
The release bundle version.
pattern
Specifies the source path in Artifactory, from which the artifacts should be bundled, in the following format: <repository name>/<repository path>. You can use wildcards to specify multiple artifacts. This argument should not be sent along with the --spec option.
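A usage sketch, assuming the Distribution command group prefix 'ds'; the bundle name, version, repository pattern, and spec file are illustrative placeholders:

```shell
# Create and sign a release bundle from a repository path:
jf ds rbc --sign --desc "First GA build" myBundle 1.0.0 "libs-release-local/my-app/*"

# Update the same bundle version using a file spec instead of a pattern:
jf ds rbu --spec bundle-spec.json myBundle 1.0.0
```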
Command-name
release-bundle-sign
Abbreviation
rbs
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--passphrase
[Optional] The passphrase for the signing key.
--repo
[Optional] A repository name at source Artifactory to store release bundle artifacts in. If not provided, Artifactory will use the default one.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--detailed-summary
[Default: false] Set to true to return the SHA256 value of the release bundle manifest.
Command arguments:
release bundle name
The name of the release bundle.
release bundle version
The release bundle version.
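A usage sketch, assuming the Distribution command group prefix 'ds'; the bundle name, version, and passphrase variable are illustrative:

```shell
# Sign a previously created release bundle version:
jf ds rbs --passphrase "$SIGNING_PASSPHRASE" --detailed-summary myBundle 1.0.0
```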
Command-name
release-bundle-distribute
Abbreviation
rbd
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--sync
[Default: false] Set to true to enable sync distribution (the command execution will end when the distribution process ends).
--max-wait-minutes
[Default: 60] Max minutes to wait for sync distribution.
--create-repo
[Default: false] Set to true to create the repository on the edge if it does not exist.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--dist-rules
[Optional] Path to a file, which includes the Distribution Rules in a JSON format. Distribution Rules JSON structure:
{
  "distribution_rules": [
    {
      "site_name": "DC-1",
      "city_name": "New-York",
      "country_codes": ["1"]
    },
    {
      "site_name": "DC-2",
      "city_name": "Tel-Aviv",
      "country_codes": ["972"]
    }
  ]
}
The Distribution Rules format also supports wildcards. For example:
{
  "distribution_rules": [
    {
      "site_name": "*",
      "city_name": "*",
      "country_codes": ["*"]
    }
  ]
}
--site
[Default: *] Wildcard filter for site name.
--city
[Default: *] Wildcard filter for site city name.
--country-codes
[Default: *] Semicolon-separated(;) list of wildcard filters for site country codes.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
Command arguments:
release bundle name
The name of the release bundle.
release bundle version
The release bundle version.
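A usage sketch, assuming the Distribution command group prefix 'ds'; the bundle name, version, and rules file are illustrative:

```shell
# Distribute a signed release bundle to all sites and wait for completion:
jf ds rbd --site "*" --sync --create-repo myBundle 1.0.0

# Or target specific edge nodes using a distribution rules file:
jf ds rbd --dist-rules dist-rules.json myBundle 1.0.0
```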
Command-name
release-bundle-delete
Abbreviation
rbdel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--sync
[Default: false] Set to true to enable sync deletion (the command execution will end when the deletion process ends).
--max-wait-minutes
[Default: 60] Max minutes to wait for sync deletion.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--dist-rules
[Optional] Path to a file, which includes the distribution rules in a JSON format.
--site
[Default: *] Wildcard filter for site name.
--city
[Default: *] Wildcard filter for site city name.
--country-codes
[Default: *] Semicolon-separated(;) list of wildcard filters for site country codes.
--delete-from-dist
[Default: false] Set to true to delete release bundle version in JFrog Distribution itself after deletion is complete in the specified Edge nodes.
--quiet
[Default: false] Set to true to skip the delete confirmation message.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
Command arguments:
release bundle name
The name of the release bundle.
release bundle version
The release bundle version.
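A usage sketch, assuming the Distribution command group prefix 'ds'; the bundle name and version are illustrative:

```shell
# Delete a release bundle version from all edge nodes, and from
# JFrog Distribution itself, without a confirmation prompt:
jf ds rbdel --site "*" --delete-from-dist --quiet myBundle 1.0.0
```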
Command name
xr offline-update
Abbreviation
xr ou
Command options
--license-id
[Mandatory] Xray license ID.
--from
[Optional] From update date in YYYY-MM-DD format.
--to
[Optional] To update date in YYYY-MM-DD format.
--version
[Optional] Xray API version.
--target
[Default: ./] Path for downloaded update files.
--stream
[Default: false] Set to true to use the Xray DBSync V3 stream. Possible values are: public_data, exposures, and contextual_analysis.
--periodic
[Default: false] Set to true to get the Xray DBSync V3 Periodic Package (use with the --stream flag).
Command arguments
The command accepts no arguments.
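A usage sketch for the offline-update command; the license ID and target directory are placeholders:

```shell
# Download offline DB update files into a local directory:
jf xr ou --license-id=<your-license-id> --target=./xray-updates/

# Use the DBSync V3 public data stream:
jf xr ou --license-id=<your-license-id> --stream=public_data
```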
Command name
rt upload
Abbreviation
rt u
Command arguments:
The command takes two arguments: source path and target path. If the --spec option is used, the command accepts no arguments.
Source path
The first argument specifies the local file system path to artifacts that should be uploaded to Artifactory. You can specify multiple artifacts by using wildcards or a regular expression as designated by the --regexp command option. Please read the --regexp option description for more information.
Target path
The second argument specifies the target path in Artifactory in the following format: [repository name]/[repository path]
If the target path ends with a slash, the path is assumed to be a folder. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a folder in Artifactory into which files should be uploaded. If there is no terminal slash, the target path is assumed to be a file to which the uploaded file should be renamed. For example, if you specify the target as "repo-name/a/b", the uploaded file is renamed to "b" in Artifactory.
For flexibility in specifying the upload path, you can include placeholders in the form of {1}, {2} which are replaced by corresponding tokens in the source path that are enclosed in parentheses. For more details, please refer to Using Placeholders.
Command options:
When using the * or ; characters in the upload command options or arguments, make sure to wrap the whole options or arguments string in quotes (") to make sure the * or ; characters are not interpreted as literals.
--archive
[Optional] Set to "zip" to pack and deploy the files to Artifactory inside a ZIP archive. Currently, the only packaging format supported is zip.
--server-id
[Optional] Server ID configured using the jf c add command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--target-props
[Optional] List of semicolon-separated(;) Artifactory properties specified as "key=value" pairs to be attached to the uploaded files (for example: "key1=value1;key2=value21,value22;key3=value3").
--deb
[Optional] Used for Debian packages only. Specifies the distribution/component/architecture of the package. If the value for distribution, component or architecture includes a slash, the slash should be escaped with a backslash.
--flat
[Default: false] If true, files are uploaded to the exact target path specified and their hierarchy in the source file system is ignored. If false, files are uploaded to the target path while maintaining their file system hierarchy. If placeholders are used, the value of this option is ignored.
Note: in JFrog CLI v1, the default value of the --flat option is true.
--recursive
[Default: true] If true, files are also collected from sub-folders of the source directory for upload. If false, only files specifically in the source directory are uploaded.
--regexp
[Default: false] If true, the command will interpret the first argument, which describes the local file-system path of artifacts to upload, as a regular expression. If false, it will interpret the first argument as a wildcard expression. The above also applies to the --exclusions option. If you have specified that you are using regular expressions, then the beginning of the expression must be enclosed in parentheses. For example: a/b/c/(.*)/file.zip
--ant
[Default: false] If true, the command will interpret the first argument, which describes the local file-system path of artifacts to upload, as an ANT pattern. If false, it will interpret the first argument as a wildcard expression. The above also applies to the --exclusions option.
--threads
[Default: 3] The number of parallel threads that should be used to upload where each thread uploads a single artifact at a time.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been uploaded. If false, the command is fully executed and uploads artifacts as specified.
--symlinks
[Default: false] If true, the command will preserve the soft links structure in Artifactory. The symlink file representation will contain the symbolic link and checksum properties.
--explode
[Default: false] If true, the command will extract an archive containing multiple artifacts after it is deployed to Artifactory, while maintaining the archive's file structure.
--include-dirs
[Default: false] If true, the source path applies to bottom-chain directories and not only to files. Bottom-chain directories are either empty or do not include other directories that match the source path.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards, regular expressions or ANT patterns, according to the value of the --regexp and --ant options. Please read the --regexp and --ant options description for more information.
--sync-deletes
[Optional] Specific path in Artifactory, under which to sync artifacts after the upload. After the upload, this path will include only the artifacts uploaded during this upload operation. The other files under this path will be deleted.
--quiet
[Default: false] If true, the delete confirmation message is skipped.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 in case no files are affected.
--retries
[Default: 3] Number of upload retries.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--chunk-size
[Default: 20] The upload chunk size in MiB that can be concurrently uploaded during a multi-part upload. This option, as well as the functionality of multi-part upload, requires Artifactory with S3 or GCP storage.
--min-split
[Default: 200] The minimum file size in MiB required to attempt a multi-part upload. This option, as well as the functionality of multi-part upload, requires Artifactory with S3 or GCP storage.
--split-count
[Default: 5] The maximum number of parts that can be concurrently uploaded per file during a multi-part upload. Set to 0 to disable multi-part upload. This option, as well as the functionality of multi-part upload, requires Artifactory with S3 or GCP storage.
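A usage sketch for rt upload; the repository name, paths, properties, and build coordinates are illustrative placeholders:

```shell
# Upload all ZIPs under build/ into a repository folder, keeping
# the source hierarchy (--flat defaults to false in v2):
jf rt upload "build/*.zip" generic-local/releases/

# Upload with attached properties while collecting build-info:
jf rt upload "build/*.zip" generic-local/releases/ \
  --target-props "team=core;stage=qa" \
  --build-name my-app --build-number 42
```

Note that the pattern is wrapped in quotes so the shell does not expand the * character.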
Command name
rt download
Abbreviation
rt dl
Command arguments:
The command takes two arguments: source path and target path (optional). If the --spec option is used, the command accepts no arguments.
Source path
Specifies the source path in Artifactory, from which the artifacts should be downloaded. You can use wildcards to specify multiple artifacts.
Target path
The second argument is optional and specifies the local file system target path. If the target path ends with a slash, the path is assumed to be a directory. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a directory into which files should be downloaded. If there is no terminal slash, the target path is assumed to be a file to which the downloaded file should be renamed. For example, if you specify the target as "a/b", the downloaded file is renamed to "b". For flexibility in specifying the target path, you can include placeholders in the form of {1}, {2} which are replaced by corresponding tokens in the source path that are enclosed in parentheses. For more details, please refer to Using Placeholders.
Command options:
When using the * or ; characters in the download command options or arguments, make sure to wrap the whole options or arguments string in quotes (") to make sure the * or ; characters are not interpreted as literals.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with all of the specified properties names and values will be downloaded.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be downloaded.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified Release Bundle (v1 or v2) are matched. The value format is bundle-name/bundle-version. If Release Bundles with the same name and version exist for both v1 and v2, the contents of the Release Bundle v2 version are downloaded.
--flat
[Default: false] If true, artifacts are downloaded to the exact target path specified and their hierarchy in the source repository is ignored. If false, artifacts are downloaded to the target path in the file system while maintaining their hierarchy in the source repository. If placeholders are used, and you would like the local download path to be determined by the placeholders alone (in other words, to avoid appending the Artifactory folder hierarchy locally), set this option to false.
--recursive
[Default: true] If true, artifacts are also downloaded from sub-paths under the specified path in the source repository. If false, only artifacts in the specified source path directory are downloaded.
--threads
[Default: 3] The number of parallel threads that should be used to download where each thread downloads a single artifact at a time.
--split-count
[Default: 3] The number of segments into which each file should be split for download (provided the artifact is over --min-split in size). To download each file in a single thread, set to 0.
--retries
[Default: 3] Number of download retries.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
--min-split
[Default: 5120] The minimum size permitted for splitting. Files larger than the specified number will be split into equally sized --split-count segments. Any files smaller than the specified number will be downloaded in a single thread. If set to -1, files are not split.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been downloaded. If false, the command is fully executed and downloads artifacts as specified.
--explode
[Default: false] Set to true to extract an archive after it is downloaded from Artifactory. Supported compression formats: br, bz2, gz, lz4, sz, xz, zstd. Supported archive formats: zip, tar (including any compressed variants like tar.gz), rar.
--bypass-archive-inspection
[Default: false] Set to true to bypass the archive security inspection before it is unarchived. Used with the --explode option.
--validate-symlinks
[Default: false] If true, the command will validate that symlinks are pointing to existing and unchanged files, by comparing their sha1. Applicable to files and not directories.
--include-dirs
[Default: false] If true, the source path applies to bottom-chain directories and not only to files. Bottom-chain directories are either empty or do not include other directories that match the source path.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sync-deletes
[Optional] Specific path in the local file system, under which to sync dependencies after the download. After the download, this path will include only the dependencies downloaded during this download operation. The other files under this path will be deleted.
--quiet
[Default: false] If true, the delete confirmation message is skipped.
--sort-by
[Optional] A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc] The order by which fields in the 'sort-by' option should be sorted. Accepts asc or desc.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 in case no files are affected.
--archive-entries
[Optional] This option is no longer supported since version 7.90.5 of Artifactory. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--gpg-key
[Optional] Path to the public GPG key file located on the file system, used to validate downloaded release bundle files.
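A usage sketch for rt download; the repository name, paths, and build name are illustrative placeholders:

```shell
# Download a repository folder into a local directory:
jf rt download "generic-local/releases/" ./artifacts/

# Download only artifacts of the latest build with that name,
# newest first, limited to 10 items:
jf rt download "generic-local/*.zip" --build my-app \
  --sort-by created --sort-order desc --limit 10
```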
Command name
rt copy
Abbreviation
rt cp
Command arguments:
The command takes two arguments: source path and target path. If the --spec option is used, the command accepts no arguments.
Source path
Specifies the source path in Artifactory, from which the artifacts should be copied, in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple artifacts.
Target path
Specifies the target path in Artifactory, to which the artifacts should be copied, in the following format: [repository name]/[repository path]
By default the Target Path maintains the source path hierarchy, see --flat flag for more info. If the pattern ends with a slash, the target path is assumed to be a folder. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a folder in Artifactory into which files should be copied. If there is no terminal slash, the target path is assumed to be a file to which the copied file should be renamed. For example, if you specify the target as "repo-name/a/b", the copied file is renamed to "b" in Artifactory.
For flexibility in specifying the target path, you can include placeholders in the form of {1}, {2} which are replaced by corresponding tokens in the source path that are enclosed in parentheses. For more details, please refer to Using Placeholders.
Command options:
When using the * or ; characters in the copy command options or arguments, make sure to wrap the whole options or arguments string in quotes (") to make sure the * or ; characters are not interpreted as literals.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs. (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with these properties names and values will be copied.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be copied.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--flat
[Default: false] If true, artifacts are copied to the exact target path specified and their hierarchy in the source path is ignored. If false, artifacts are copied to the target path while maintaining their source path hierarchy.
--recursive
[Default: true] If true, artifacts are also copied from sub-paths under the specified source path. If false, only artifacts in the specified source path directory are copied.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been copied. If false, the command is fully executed and copies artifacts as specified.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--threads
[Default: 3] Number of threads used for copying the items.
--sort-by
[Optional] A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc] The order by which fields in the 'sort-by' option should be sorted. Accepts asc or desc.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 in case no files are affected.
--archive-entries
[Optional] This option is no longer supported since version 7.90.5 of Artifactory. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
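A usage sketch for rt copy; the repository names and property values are illustrative placeholders:

```shell
# Copy artifacts between repositories, maintaining the source hierarchy:
jf rt copy "staging-local/my-app/*.zip" releases-local/my-app/

# Copy only artifacts carrying a specific property:
jf rt cp "staging-local/*" releases-local/ --props "stage=qa"
```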
Command name
rt move
Abbreviation
rt mv
Command arguments:
The command takes two arguments: source path and target path. If the --spec option is used, the command accepts no arguments.
Source path
Specifies the source path in Artifactory, from which the artifacts should be moved, in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple artifacts.
Target path
Specifies the target path in Artifactory, to which the artifacts should be moved, in the following format: [repository name]/[repository path]
By default the Target Path maintains the source path hierarchy, see --flat flag for more info. If the pattern ends with a slash, the target path is assumed to be a folder. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a folder in Artifactory into which files should be moved. If there is no terminal slash, the target path is assumed to be a file to which the moved file should be renamed. For example, if you specify the target as "repo-name/a/b", the moved file is renamed to "b" in Artifactory.
For flexibility in specifying the target path, you can include placeholders in the form of {1}, {2} which are replaced by corresponding tokens in the source path that are enclosed in parentheses. For more details, please refer to Using Placeholders.
Command options:
When using the * or ; characters in the move command options or arguments, make sure to wrap the whole options or arguments string in quotes (") to make sure the * or ; characters are not interpreted as literals.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with these properties names and values will be moved.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be moved.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--flat
[Default: false] If true, artifacts are moved to the exact target path specified and their hierarchy in the source path is ignored. If false, artifacts are moved to the target path while maintaining their source path hierarchy.
--recursive
[Default: true] If true, artifacts are also moved from sub-paths under the specified source path. If false, only artifacts in the specified source path directory are moved.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been moved. If false, the command is fully executed and moves artifacts as specified.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--threads
[Default: 3] Number of threads used for moving the items.
--sort-by
[Optional]
A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc]
The order by which fields in the 'sort-by' option should be sorted. Accepts asc or desc.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--archive-entries
[Optional] This option is no longer supported as of Artifactory version 7.90.5. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
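For example, the following sketch moves ZIP files between two hypothetical repositories while keeping the source hierarchy (the --flat default is false in v2); the pattern is quoted so the shell does not expand the * character:

```shell
# Preview the move of all ZIP files from "source-local" to
# "target-local/archives/" (both repository names are illustrative).
jf rt mv "source-local/*.zip" target-local/archives/ --dry-run
```

Dropping --dry-run performs the actual move.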
Command name
rt delete
Abbreviation
rt del
Command arguments:
The command takes one argument, which is the delete path. If the --spec option is used, the command accepts no arguments.
Delete path
Specifies the path in Artifactory of the files that should be deleted in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple artifacts.
Command options:
When using the * or ; characters in the delete command options or arguments, wrap the whole options or arguments string in quotes (") so that the * and ; characters are not interpreted by the shell.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with these property names and values will be deleted.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified property names and values will be deleted.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--recursive
[Default: true] If true, artifacts are also deleted from sub-paths under the specified path.
--quiet
[Default: false] If true, the delete confirmation message is skipped.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been deleted. If false, the command is fully executed and deletes artifacts as specified.
--exclusions
A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sort-by
[Optional]
A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc]
The order by which fields in the 'sort-by' option should be sorted. Accepts asc or desc.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--archive-entries
[Optional] This option is no longer supported as of Artifactory version 7.90.5. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--threads
[Default: 3] Number of threads used for deleting the items.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
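As an illustration, a typical delete flow previews the matches first, then runs without the confirmation prompt (the repository name is hypothetical):

```shell
# Show which artifacts under "temp-local/builds/" would be deleted.
jf rt del "temp-local/builds/*" --dry-run
# Delete them, skipping the interactive confirmation message.
jf rt del "temp-local/builds/*" --quiet
```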
Command name
rt search
Abbreviation
rt s
Command arguments:
The command takes one argument, which is the search path. If the --spec option is used, the command accepts no arguments.
Search path
Specifies the search path in Artifactory, in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple artifacts.
Command options:
When using the * or ; characters in the command options or arguments, wrap the whole options or arguments string in quotes (") so that the * and ; characters are not interpreted by the shell.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--count
[Optional] Set to true to display only the total number of files or folders found.
--include-dirs
[Default: false] Set to true if you'd like to also apply the source path pattern to directories and not only to files.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with these property names and values will be returned.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified property names and values will be returned.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--recursive
[Default: true] Set to false if you do not wish to search artifacts inside sub-folders in Artifactory.
--exclusions
A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sort-by
[Optional]
A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc]
The order by which fields in the 'sort-by' option should be sorted. Accepts asc or desc.
--transitive
[Optional] Set to true to look for artifacts also in remote repositories. Available on Artifactory version 7.17.0 or higher.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--archive-entries
[Optional] This option is no longer supported as of Artifactory version 7.90.5. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
--include
[Optional]
A list of semicolon-separated(;) fields in the form of "value1;value2;...". Only the path and the specified fields are returned. The fields must be part of the 'items' AQL domain. For the full list of supported fields, see the AQL documentation.
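For example, the following sketch lists the ten most recently created JAR files in a hypothetical repository, returning only a few fields per item:

```shell
# Search "libs-release-local" for JARs, newest first, limited to 10
# results; only the name, size, and created fields are returned.
jf rt s "libs-release-local/*.jar" \
  --sort-by=created --sort-order=desc --limit=10 \
  --include="name;size;created"
```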
Command name
rt set-props
Abbreviation
rt sp
Command arguments:
The command takes two arguments: files pattern and files properties. If the --spec option is used, the command accepts no arguments.
Files pattern
The specified properties will be set on files that match the pattern.
Files properties
A list of semicolon-separated(;) key-value pairs in the form of key1=value1;key2=value2;..., to be set on the matching files.
Command options:
When using the * or ; characters in the command options or arguments, wrap the whole options or arguments string in quotes (") so that the * and ; characters are not interpreted by the shell.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--props
[Optional] List of semicolon-separated(;) properties in the form of "key1=value1;key2=value2;...". Only files with these property names and values are affected.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified property names and values will be affected.
--recursive
[Default: true] When false, artifacts inside sub-folders in Artifactory will not be affected.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--include-dirs
[Default: false] When true, the properties will also be set on folders (and not just files) in Artifactory.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--exclusions
A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sort-by
[Optional]
A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc]
The order by which fields in the 'sort-by' option should be sorted. Accepts asc or desc.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--archive-entries
[Optional] This option is no longer supported as of Artifactory version 7.90.5. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--threads
[Default: 3] Number of working threads.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
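As a sketch, both arguments are quoted below because they contain * and ; characters; the repository, property names, and values are all illustrative:

```shell
# Set two properties on every JAR in "libs-release-local".
jf rt sp "libs-release-local/*.jar" "status=approved;team=core"
```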
Command name
rt delete-props
Abbreviation
rt delp
Command arguments:
The command takes two arguments: files pattern and properties list. If the --spec option is used, the command accepts no arguments.
Files pattern
Specifies the files pattern in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple repositories and files.
Properties list
A comma-separated(,) list of properties, in the form of key1,key2,..., to be deleted from the matching files.
Command options:
When using the * or ; characters in the command options or arguments, wrap the whole options or arguments string in quotes (") so that the * and ; characters are not interpreted by the shell.
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--props
[Optional] List of semicolon-separated(;) properties in the form of "key1=value1;key2=value2;...". Only files with these properties are affected.
--exclude-props
[Optional] List of semicolon-separated(;) Artifactory properties specified as "key=value" (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified property names and values will be affected.
--recursive
[Default: true] When false, artifacts inside sub-folders in Artifactory will not be affected.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--include-dirs
[Default: false] When true, the properties will also be deleted from folders (and not just files) in Artifactory.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--exclusions
List of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sort-by
[Optional]
A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc]
The order by which fields in the 'sort-by' option should be sorted. Accepts asc or desc.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--archive-entries
[Optional] This option is no longer supported as of Artifactory version 7.90.5. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
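For example, the following sketch removes two properties from matching files; note that, unlike set-props, the properties list here is comma-separated (all names are illustrative):

```shell
# Remove the "status" and "team" properties from every JAR
# in the hypothetical repository "libs-release-local".
jf rt delp "libs-release-local/*.jar" "status,team"
```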
Command name
Default
Description
audit, aud
Executes all of the following sub-scans: SCA (Software Composition Analysis) + Contextual Analysis, SAST, IaC, and Secrets.
Command options
--help
Displays information about each of the jf audit command's options.
--server-id
Default server
[Optional]
Server ID configured using the jf c add command.
--project
[Optional]
JFrog project key, to enable Xray to determine security violations accordingly. The command accepts this option only if the --repo-path and --watches options are not provided. If none of the three options are provided, the command shows all known vulnerabilities.
--extended-table
false
When set to true, the result table includes extended fields such as CVSS and Xray Issue Id. Must be executed with --format table.
--repo-path
[Optional]
Artifactory repository path, to enable Xray to determine violations accordingly. The command accepts this option only if the --project and --watches options are not provided. If none of the three options are provided, the command shows all known vulnerabilities.
--watches
[Optional]
A comma-separated(,) list of Xray watches, to enable Xray to determine violations accordingly. The command accepts this option only if the --repo-path and --project options are not provided. If none of the three options are provided, the command shows all known vulnerabilities.
--licenses
false
Set if you'd also like the list of licenses to be displayed.
--format
table
Defines the output format of the command. Acceptable values are: table and json.
--fail
true
When using one of the flags --watches, --project, or --repo-path, and a Fail build rule is matched, the command returns exit code 3. Set to false if you want to see violations with exit code 0.
--use-wrapper
false
[Gradle, Maven]
Set to true if you'd like to use the Gradle or Maven wrapper.
--dep-type
all
[npm]
Defines npm dependency types. Acceptable values are: all, devOnly, and prodOnly.
--exclude-test-deps
false
[Gradle]
Set to true if you'd like to exclude Gradle test dependencies from Xray scanning.
--requirements-file
[Optional] [Pip]
Defines the pip requirements file name. For example: 'requirements.txt'.
--working-dirs
[Optional] A comma-separated(,) list of relative working directories, to determine the audit target locations. If the flag isn't provided, a recursive scan is triggered from the root directory of the project.
--exclusions
.git;node_modules;target;venv;test
List of semicolon-separated(;) exclusion patterns used to skip sub-projects from the audit. These patterns may include the * and ? wildcards.
--fixable-only
[Optional] Set to true if you wish to display only issues that have a fix version.
--min-severity
[Optional]
Set the minimum severity of issues to display. Acceptable values: Low, Medium, High, or Critical.
--threads
3
The number of parallel threads used to scan the source code project.
--go
false
Set to true to request audit for a Go project.
--gradle
false
Set to true to request audit for a Gradle project.
--mvn
false
Set to true to request audit for a Maven project.
--npm
false
Set to true to request audit for an npm project.
--pnpm
false
Set to true to request audit for a pnpm project.
--nuget
false
Set to true to request audit for a .Net project.
--pip
false
Set to true to request audit for a Pip project.
--pipenv
false
Set to true to request audit for a Pipenv project.
--yarn
false
Set to true to request audit for a Yarn project.
--sca
false
Selective scanners mode
Executes the SCA (Software Composition Analysis) sub-scan. Use --sca alone to run both SCA and Contextual Analysis, or --sca --without-contextual-analysis to run SCA only. Can be combined with --secrets, --sast, and --iac.
--without-contextual-analysis
false
Selective scanners mode
Disables the Contextual Analysis scanner that runs after SCA. Relevant only with the --sca flag.
--iac
false
Selective scanners mode
Executes the IaC sub-scan. Can be combined with --sca, --secrets, and --sast.
--secrets
false
Selective scanners mode
Executes the Secrets sub-scan. Can be combined with --sca, --sast, and --iac.
--validate-secrets
false
Selective scanners mode
Triggers token validation on found secrets. Relevant only with the --secrets flag.
--sast
false
Selective scanners mode
Executes the SAST sub-scan. Can be combined with --sca, --secrets, and --iac.
--vuln
[Optional] Set if you'd like to receive all vulnerabilities, regardless of the policy configured in Xray.
Command arguments
The command accepts no arguments.
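As a sketch, the selective scanners mode described above can be combined with severity filtering (run from the project root; all values are illustrative):

```shell
# Run only the SCA and Secrets sub-scans, showing issues of
# severity High or above in the standard table output.
jf audit --sca --secrets --min-severity=High --format=table
```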
Command name
build-scan
Abbreviation
bs
Command options:
--server-id
[Optional] Server ID configured by the jf c add command. If not specified, the default configured server is used.
--vuln
[Optional] Set if you'd like to receive all vulnerabilities, regardless of the policy configured in Xray.
--fail
[Default: true] When using one of the flags --watches, --project, or --repo-path, and a Fail build rule is matched, the command returns exit code 3. Set to false if you'd like to see violations with exit code 0.
--format
[Default: table] Defines the output format of the command. The accepted values are: table and json.
--project
[Optional] JFrog project key
--rescan
[Default: false] Set to true when scanning an already successfully scanned build, for example after adding an ignore rule.
Command arguments:
The command accepts two arguments.
Build name
Build name to be scanned.
Build number
Build number to be scanned.
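For example, a hypothetical published build can be scanned like this, printing all vulnerabilities as JSON without failing on policy rules:

```shell
# Scan build "my-build", number 42 (both names illustrative).
jf bs my-build 42 --vuln --fail=false --format=json
```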
Command name
sbom-enrich
Abbreviation
se
Command options
--server-id
[Optional] Server ID configured using the jf c add command. If not specified, the default configured server is used.
Command arguments
file_path
The SBOM file path.
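For example, assuming a hypothetical SBOM file path:

```shell
# Enrich an existing SBOM file with data from Xray.
jf se ./build/bom.json
```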
Command name
rt build-collect-env
Abbreviation
rt bce
Command arguments:
The command accepts two arguments.
Build name
Build name.
Build number
Build number.
Command options:
--project
[Optional] JFrog project key.
Command name
rt build-add-git
Abbreviation
rt bag
Command arguments:
The command accepts three arguments.
Build name
Build name.
Build number
Build number.
.git path
Optional - Path to a directory containing the .git directory. If not specified, the .git directory is assumed to be in the current directory or in one of the parent directories.
Command options:
--config
[Optional] Path to a yaml configuration file, used for collecting tracked project issues and adding them to the build-info.
--server-id
[Optional]
Server ID configured using the 'jf config' command. This is the server to which the build-info will be later published, using the jf rt build-publish
command. This option, if provided, overrides the serverID value in this command's yaml configuration. If both values are not provided, the default server, configured by the 'jf config' command, is used.
--project
[Optional] JFrog project key.
Property name
Description
Version
The schema version is intended for internal use. Do not change!
serverID
Artifactory server ID configured by the 'jf config' command. The command uses this server for fetching details about previous published builds. The --server-id command option, if provided, overrides the serverID value. If both the serverID property and the --server-id command options are not provided, the default server, configured by the 'jf config' command is used.
trackerName
The name (type) of the issue tracking system. For example, JIRA. This property can take any value.
regexp
A regular expression used for matching the git commit messages. The expression should include two capturing groups - for the issue key (ID) and the issue summary. In the example above, the regular expression matches the commit messages as displayed in the following example: HAP-1007 - This is a sample issue
keyGroupIndex
The capturing group index in the regular expression used for retrieving the issue key. In the example above, setting the index to "1" retrieves HAP-1007 from this commit message: HAP-1007 - This is a sample issue
summaryGroupIndex
The capturing group index in the regular expression for retrieving the issue summary. In the example above, setting the index to "2" retrieves the sample issue from this commit message: HAP-1007 - This is a sample issue
trackerUrl
The issue tracking URL. This value is used for constructing a direct link to the issues in the Artifactory build UI.
aggregate
Set to true if you wish all builds to include issues from previous builds.
aggregationStatus
If aggregate is set to true, this property indicates how far back in time issues should be aggregated. In the above example, issues are aggregated from previous builds until a build with a RELEASE status is found. Build statuses are set when a build is promoted using the jf rt build-promote command.
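Putting the properties above together, a minimal sketch of an issues configuration and the matching command might look like this (the file name, tracker URL, regular expression, and build name/number are all hypothetical):

```shell
# Write a hypothetical issues configuration file.
cat > issues-config.yaml <<'EOF'
version: 1
issues:
  trackerName: JIRA
  regexp: (.+-[0-9]+)\s-\s(.+)
  keyGroupIndex: 1
  summaryGroupIndex: 2
  trackerUrl: https://acme.example.com/browse
  aggregate: true
  aggregationStatus: RELEASED
EOF
# Collect git commit and issue details into the local build-info
# for the illustrative build "my-build", number 42.
jf rt bag my-build 42 --config issues-config.yaml
```

With this regexp, a commit message such as "HAP-1007 - This is a sample issue" yields HAP-1007 as the key (group 1) and the remainder as the summary (group 2).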
Command name
rt build-add-dependencies
Abbreviation
rt bad
Command arguments:
The command takes three arguments.
Build name
The build name to add the dependencies to
Build number
The build number to add the dependencies to
Pattern
Specifies the local file system path to dependencies which should be added to the build info. You can specify multiple dependencies by using wildcards or a regular expression as designated by the --regexp command option. If you have specified that you are using regular expressions, then the first one used in the argument must be enclosed in parentheses.
Command options:
When using the * or ; characters in the command options or arguments, wrap the whole options or arguments string in quotes (") so that the * and ; characters are not interpreted by the shell.
--from-rt
[Default: false] Set to true to search the files in Artifactory, rather than on the local file system. The --regexp option is not supported when --from-rt is set to true.
--server-id
[Optional] Server ID configured using the 'jf config' command.
--spec
[Optional] Path to a File Spec.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--recursive
[Default: true] When false, artifacts inside sub-folders in Artifactory will not be affected.
--regexp
[Default: false] Set to true to use a regular expression instead of a wildcard expression to collect files to be added to the build info. This option is not supported when --from-rt is set to true.
--dry-run
[Default: false] Set to true to only get a summary of the dependencies that would be added to the build info.
--module
[Optional] Module name in the build-info to which the dependencies are added.
--exclusions
A list of semicolon-separated(;) exclude patterns. Allows using wildcards or a regular expression, according to the value of the --regexp option.
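For example, the following sketch records local files as dependencies of a not-yet-published build (the build name, number, and path are illustrative):

```shell
# Add all JARs under build/libs/ as dependencies of build
# "my-build", number 42; the pattern is quoted because of *.
jf rt bad my-build 42 "build/libs/*.jar"
```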
Command name
rt build-publish
Abbreviation
rt bp
Command arguments:
The command accepts two arguments.
Build name
Build name to be published.
Build number
Build number to be published.
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--project
[Optional] JFrog project key.
--build-url
[Optional] Can be used for setting the CI server build URL in the build-info.
--env-include
[Default: *] List of semicolon-separated(;) patterns in the form of "value1;value2;..." Only environment variables that match those patterns will be included in the build info.
--env-exclude
[Default: password;secret;key] List of semicolon-separated(;) case-insensitive patterns in the form of "value1;value2;...". Environment variables that match those patterns will be excluded.
--dry-run
[Default: false] Set to true to disable communication with Artifactory.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--overwrite
[Default: false] Overwrites all existing occurrences of build infos with the provided name and number. Build artifacts will not be deleted.
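As a sketch, publishing the accumulated build-info while recording the CI job URL and extending the excluded environment-variable patterns might look like this (all names and the URL are illustrative):

```shell
# Publish build-info for "my-build", number 42.
jf rt bp my-build 42 \
  --build-url="https://ci.example.com/job/my-build/42" \
  --env-exclude="password;secret;key;token"
```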
Command name
rt build-append
Abbreviation
rt ba
Command arguments:
The command accepts four arguments.
Build name
The current (not yet published) build name.
Build number
The current (not yet published) build number.
Build name to append
The published build name to append to the current build.
Build number to append
The published build number to append to the current build.
Command options:
This command has no options.
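For example, with hypothetical build names and numbers:

```shell
# Append the already-published build "backend-build/7" to the
# current (not yet published) build "release-build/1".
jf rt ba release-build 1 backend-build 7
```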
Command name
rt build-promote
Abbreviation
rt bpr
Command arguments:
The command accepts three arguments.
Build name
Build name to be promoted.
Build number
Build number to be promoted.
Target repository
Build promotion target repository.
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--project
[Optional] JFrog project key.
--status
[Optional] Build promotion status.
--comment
[Optional] Build promotion comment.
--source-repo
[Optional] Build promotion source repository.
--include-dependencies
[Default: false] If set to true, the build dependencies are also promoted.
--copy
[Default: false] If set to true, the build artifacts and dependencies are copied to the target repository; otherwise, they are moved.
--props
[Optional] List of semicolon-separated(;) properties in the form of "key1=value1;key2=value2;..." to attach to the build artifacts.
--dry-run
[Default: false] If true, promotion is only simulated. The build is not promoted.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
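As a sketch, a promotion that copies (rather than moves) artifacts from a staging repository to a production repository might look like this (all names are illustrative):

```shell
# Promote build "my-build", number 42, into "prod-local".
jf rt bpr my-build 42 prod-local \
  --source-repo=staging-local --status=Released --copy
```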
Command name
rt build-clean
Abbreviation
rt bc
Command arguments:
The command accepts two arguments.
Build name
Build name.
Build number
Build number.
Command options:
The command has no options.
Command name
rt build-discard
Abbreviation
rt bdi
Command arguments:
The command accepts one argument.
Build name
Build name.
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--max-days
[Optional] The maximum number of days to keep builds in Artifactory.
--max-builds
[Optional] The maximum number of builds to store in Artifactory.
--exclude-builds
[Optional] List of comma-separated(,) build numbers in the form of "build1,build2,...", that should not be removed from Artifactory.
--delete-artifacts
[Default: false] If set to true, automatically removes build artifacts stored in Artifactory.
--async
[Default: false] If set to true, build discard will run asynchronously and will not wait for response.
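For example, a retention policy that keeps only the latest runs of a hypothetical build, while protecting one specific run, could be sketched as:

```shell
# Keep only the 10 latest runs of "my-build", deleting the
# artifacts of discarded runs, but never discard build number 42.
jf rt bdi my-build --max-builds=10 --exclude-builds="42" --delete-artifacts
```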
Command-name
release-bundle-create
Abbreviation
rbc
Command arguments:
release bundle name
Name of the newly created Release Bundle.
release bundle version
Version of the newly created Release Bundle.
Command options:
--project
[Optional] Project key associated with the created Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--signing-key
[Optional]
The GPG/RSA key-pair name defined in Artifactory. The signing-key can also be configured as an environment variable. If no key is specified, Artifactory uses a default key.
--spec
[Optional]
Path to a File Spec. If you do not define the spec, you must include the build-name and build-number as environment variables, flags, or a combination of both (flags override environment variables).
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." (wrapped by quotes) to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--build-name
[Optional] The name of the build from which to create the Release Bundle.
--build-number
[Optional] The number of the build from which to create the Release Bundle.
--sync
[Default: true] Set to false to run asynchronously.
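For example, creating a Release Bundle from a published build could be sketched as follows (the bundle name, build coordinates, and signing-key name are all hypothetical):

```shell
# Create Release Bundle "myApp" version 1.0.0 from build
# "my-build", number 42, signed with "my-signing-key".
jf rbc myApp 1.0.0 \
  --build-name=my-build --build-number=42 \
  --signing-key=my-signing-key
```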
Command-name
release-bundle-promote
Abbreviation
rbp
Command arguments:
release bundle name
Name of the Release Bundle to promote.
release bundle version
Version of the Release Bundle to promote.
environment
Name of the target environment for the promotion.
Command options:
--input-repos
[Optional] A list of semicolon-separated(;) repositories to include in the promotion. If this property is left undefined, all repositories (except those specifically excluded) are included in the promotion. If one or more repositories are specifically included, all other repositories are excluded.
--exclude-repos
[Optional] A list of semicolon-separated(;) repositories to exclude from the promotion.
--project
[Optional] Project key associated with the Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--signing-key
[Mandatory] The GPG/RSA key-pair name given in Artifactory.
--sync
[Default: true] Set to false to run asynchronously.
--promotion-type
[Default: copy] Specifies the promotion type. Valid values: move, copy.
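As an illustration, promoting the bundle to a target environment while moving (rather than copying) its contents might look like this (all names are hypothetical):

```shell
# Promote Release Bundle "myApp" 1.0.0 to the PROD environment.
jf rbp myApp 1.0.0 PROD \
  --signing-key=my-signing-key --promotion-type=move
```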
Command-name
release-bundle-distribute
Abbreviation
rbd
Command arguments:
release bundle name
Name of the Release Bundle to distribute.
release bundle version
Version of the Release Bundle to distribute.
Command options:
--city
[Default: *] Wildcard filter for site city name.
--country-codes
[Default: *] semicolon-separated(;) list of wildcard filters for site country codes.
--create-repo
[Default: false] Set to true to create the repository on the edge if it does not exist.
--dist-rules
[Optional] Path to a file, which includes the Distribution Rules in a JSON format. See the "Distribution Rules Structure" below.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--mapping-pattern
[Optional] Specify along with 'mapping-target' to distribute artifacts to a different path on the Edge node. You can use wildcards to specify multiple artifacts.
--mapping-target
[Optional] The target path for distributed artifacts on the edge node. If not specified, the artifacts will have the same path and name on the edge node as on the source Artifactory server. For flexibility in specifying the distribution path, you can include placeholders in the form of {1}, {2}, which are replaced by corresponding tokens in the pattern path that are enclosed in parentheses.
--max-wait-minutes
[Default: 60] Max minutes to wait for sync distribution.
--project
[Optional] Project key associated with the Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--site
[Default: *] Wildcard filter for site name.
--sync
[Default: true] Set to false to run asynchronously.
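The file passed to --dist-rules follows the Distribution Rules structure. A minimal sketch, assuming the common site/city/country-codes fields that correspond to the filters listed above (all values are illustrative):

```json
{
  "distribution_rules": [
    {
      "site_name": "*",
      "city_name": "*",
      "country_codes": ["US", "CA"]
    }
  ]
}
```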
Command-name
release-bundle-delete-local
Abbreviation
rbdell
Command arguments:
release bundle name
Name of the Release Bundle to delete.
release bundle version
Version of the Release Bundle to delete.
environment
If provided, all promotions to this environment are deleted. Otherwise, the Release Bundle is deleted locally with all its promotions.
Command options:
--project
[Optional] Project key associated with the Release Bundle version.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--sync
[Default: true] Set to false to run asynchronously.
Command-name
release-bundle-delete-remote
Abbreviation
rbdelr
Command arguments:
release bundle name
Name of the Release Bundle to delete.
release bundle version
Version of the Release Bundle to delete.
Command options:
--city
[Default: *] Wildcard filter for site city name.
--country-codes
[Default: *] semicolon-separated(;) list of wildcard filters for site country codes.
--dist-rules
[Optional] Path to a file, which includes the Distribution Rules in a JSON format. See the "Distribution Rules Structure" below.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--max-wait-minutes
[Default: 60] Max minutes to wait for sync distribution.
--project
[Optional] Project key associated with the Release Bundle version.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--site
[Default: *] Wildcard filter for site name.
--sync
[Default: true] Set to false to run asynchronously.
Command-name
release-bundle-export
Abbreviation
rbe
Command arguments:
release bundle name
Name of the Release Bundle to export.
release bundle version
Version of the Release Bundle to export.
target pattern
The argument is optional and specifies the local file system target path.
If the target path ends with a slash, the path is assumed to be a directory. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a directory into which files should be downloaded.
If there is no terminal slash, the target path is assumed to be a file to which the downloaded file should be renamed. For example, if you specify the target as "a/b", the downloaded file is renamed to "b".
Command options:
--project
[Optional] Project key associated with the Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--mapping-pattern
[Optional] Specify a list of input regex mapping pairs that define where the queried artifact is located and where it should be placed after it is imported. Use this option if the path on the target is different than the source path.
--mapping-target
[Optional] Specify a list of output regex mapping pairs that define where the queried artifact is located and where it should be placed after it is imported. Use this option if the path on the target is different than the source path.
--split-count
[Optional] The maximum number of parts that can be concurrently uploaded per file during a multi-part upload. Set to 0 to disable multi-part upload.
--min-split
[Optional] Minimum file size in KB to split into ranges when downloading. Set to -1 for no splits.
Command-name
release-bundle-import
Abbreviation
rbi
Command arguments:
path to archive
Path to the Release Bundle archive on the filesystem.
Command options:
--project
[Optional] Project key associated with the Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
Command-name
mvn-config
Abbreviation
mvnc
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Server ID for resolution. The server should be configured using the 'jf rt c' command.
--server-id-deploy
[Optional] Server ID for deployment. The server should be configured using the 'jf rt c' command.
--repo-resolve-releases
[Optional] Resolution repository for release dependencies.
--repo-resolve-snapshots
[Optional] Resolution repository for snapshot dependencies.
--repo-deploy-releases
[Optional] Deployment repository for release artifacts.
--repo-deploy-snapshots
[Optional] Deployment repository for snapshot artifacts.
--include-patterns
[Optional] Filter deployed artifacts by setting a wildcard pattern that specifies which artifacts to include. You may provide multiple comma-separated(,) patterns followed by a white-space. For example: artifact-*.jar, artifact-*.pom
--exclude-patterns
[Optional] Filter deployed artifacts by setting a wildcard pattern that specifies which artifacts to exclude. You may provide multiple comma-separated(,) patterns followed by a white-space. For example: artifact-*-test.jar, artifact-*-test.pom
--disable-snapshots
[Default:false] Set to true to disable snapshot resolution.
--snapshots-update-policy
[Optional] Set snapshot update policy. Defaults to daily.
Command arguments:
The command accepts no arguments
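For reference, mvn-config stores its configuration inside the project (typically under .jfrog/projects/maven.yaml). The sketch below shows the general shape of that file with illustrative server IDs and repository names; the exact field names may differ between CLI versions:

```yaml
version: 1
type: maven
resolver:
  serverId: my-server
  releaseRepo: libs-release
  snapshotRepo: libs-snapshot
deployer:
  serverId: my-server
  releaseRepo: libs-release-local
  snapshotRepo: libs-snapshot-local
```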
Command-name
mvn
Abbreviation
mvn
Command options:
--threads
[Default: 3] Number of threads for uploading build artifacts.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--scan
[Default: false] Set if you'd like all files to be scanned by Xray on the local file system prior to the upload, and skip the upload if any of the files are found vulnerable.
--format
[Default: table] Should be used with the --scan option. Defines the scan output format. Accepts table or json as values.
Command arguments:
The command accepts the same arguments and options as the mvn client.
Command-name
gradle-config
Abbreviation
gradlec
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Server ID for resolution. The server should be configured using the 'jf c add' command.
--server-id-deploy
[Optional] Server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
--repo-deploy
[Optional] Repository for artifacts deployment.
--uses-plugin
[Default: false] Set to true if the Gradle Artifactory Plugin is already applied in the build script.
--use-wrapper
[Default: false] Set to true if you'd like to use the Gradle wrapper.
--deploy-maven-desc
[Default: true] Set to false if you do not wish to deploy Maven descriptors.
--deploy-ivy-desc
[Default: true] Set to false if you do not wish to deploy Ivy descriptors.
--ivy-desc-pattern
[Default: '[organization]/[module]/ivy-[revision].xml'] Set the deployed Ivy descriptor pattern.
--ivy-artifacts-pattern
[Default: '[organization]/[module]/[revision]/[artifact]-[revision].[ext]'] Set the deployed Ivy artifacts pattern.
Command arguments:
The command accepts no arguments
Command-name
gradle
Abbreviation
gradle
Command options:
--threads
[Default: 3] Number of threads for uploading build artifacts.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--scan
[Default: false] Set if you'd like all files to be scanned by Xray on the local file system prior to the upload, and skip the upload if any of the files are found vulnerable.
--format
[Default: table] Should be used with the --scan option. Defines the scan output format. Accepts table or json as values.
Command arguments:
The command accepts the same arguments and options as the gradle client.
Command-name
docker pull
Abbreviation
dpl
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--skip-login
[Default: false] Set to true if you'd like the command to skip performing docker login.
Command arguments:
The same arguments and options supported by the docker client.
Command-name
docker push
Abbreviation
dp
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--skip-login
[Default: false] Set to true if you'd like the command to skip performing docker login.
--threads
[Default: 3] Number of working threads.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
Command arguments:
The same arguments and options supported by the docker client.
Command-name
rt podman-pull
Abbreviation
rt ppl
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--skip-login
[Default: false] Set to true if you'd like the command to skip performing docker login.
Command argument
Image tag
The docker image tag to pull.
Source repository
Source repository in Artifactory.
Command-name
rt podman-push
Abbreviation
rt pp
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--skip-login
[Default: false] Set to true if you'd like the command to skip performing docker login.
--threads
[Default: 3] Number of working threads.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
Command argument
Image tag
The docker image tag to push.
Target repository
Target repository in Artifactory.
Command-name
rt build-docker-create
Abbreviation
rt bdc
Command options:
--image-file
Path to a file which includes one line in the following format: IMAGE-TAG@sha256:MANIFEST-SHA256. For example, the content of the file would look like: superfrog-docker.jfrog.io/hello-frog@sha256:30f04e684493fb5ccc030969df6de0
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--threads
[Default: 3] Number of working threads.
Command argument
Target repository
The name of the repository to which the image was pushed.
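For example, the file passed to --image-file can be produced like this (the image name and the truncated digest are illustrative):

```shell
# Write the single line expected by --image-file: IMAGE-TAG@sha256:MANIFEST-SHA256
echo "superfrog-docker.jfrog.io/hello-frog@sha256:30f04e684493fb5ccc030969df6de0" > image-file-details

# Verify the file content
cat image-file-details
```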
Command-name
rt docker-promote
Abbreviation
rt dpr
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--copy
[Default: false] If set to true, the Docker image is copied to the target repository; otherwise, it is moved.
--source-tag
[Optional] The tag name to promote.
--target-docker-image
[Optional] Docker target image name.
--target-tag
[Optional] The target tag to assign the image after promotion.
Command argument
source docker image
The docker image name to promote.
source repository
Source repository in Artifactory.
target repository
Target repository in Artifactory.
Command-name
npm-config
Abbreviation
npmc
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--server-id-deploy
[Optional] Artifactory server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
--repo-deploy
[Optional] Repository for artifacts deployment.
Command arguments:
The command accepts no arguments
Command-name
npm
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--threads
[Default: 3] Number of working threads for build-info collection.
Command arguments:
The command accepts the same arguments and options as the npm client.
Command-name
npm publish
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
--scan
[Default: false] Set if you'd like all files to be scanned by Xray on the local file system prior to the upload, and skip the upload if any of the files are found vulnerable.
--format
[Default: table] Should be used with the --scan option. Defines the scan output format. Accepts table or json as values.
Command argument
The command accepts the same arguments and options that the npm pack command expects.
Command-name
yarn-config
Abbreviation
yarnc
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
Command arguments:
The command accepts no arguments
Command-name
yarn
Command options:
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--threads
[Default: 3] Number of working threads for build-info collection.
Command arguments:
The command accepts the same arguments and options as the yarn client.
Command-name
go-config
Abbreviation
Command options:
--global
[Default: false] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--server-id-deploy
[Optional] Artifactory server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
--repo-deploy
[Optional] Repository for artifacts deployment.
Command-name
go
Abbreviation
go
Command options:
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--no-fallback
[Default: false] Set to true to avoid downloading packages from the VCS if they are missing in Artifactory.
--module
[Optional] Optional module name for the build-info.
Command arguments:
Go command
The command accepts the same arguments and options as the go client.
Command-name
go-publish
Abbreviation
gp
Command options:
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
Command argument
Version
The version of the Go project that is being published.
Command-name
pip-config / pipenv-config
Abbreviation
pipc / pipec
Command options:
--global
[Default: false] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
--server-id-deploy
[Optional] Artifactory server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-deploy
[Optional] Repository for artifacts deployment.
Command-name
pip / pipenv
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
Command argument
The command accepts the same arguments and options as the pip / pipenv clients.
Command-name
twine
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
Command argument
The command accepts the arguments and options supported by the twine client, except for repository configuration and authentication options.
Command-name
poetry-config
Abbreviation
poc
Command options:
--global
[Default: false] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
Command-name
poetry
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
Command argument
The command accepts the same arguments and options as the poetry client.
Command-name
nuget-config / dotnet-config
Abbreviation
nugetc / dotnetc
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
--nuget-v2
[Default: false] Set to true if you'd like to use the NuGet V2 protocol when restoring packages from Artifactory (instead of NuGet V3).
Command arguments:
The command accepts no arguments
Command-name
nuget / dotnet
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
Command argument
The command accepts the same arguments and options as the NuGet client / .NET Core CLI.
Command-name
terraform-config
Abbreviation
tfc
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-deploy
[Optional] Artifactory server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-deploy
[Optional] Repository for artifacts deployment.
Command arguments:
The command accepts no arguments
Command-name
terraform publish
Abbreviation
tf p
Command options:
--namespace
[Mandatory] Terraform module namespace
--provider
[Mandatory] Terraform module provider
--tag
[Mandatory] Terraform module tag
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns wildcards. Paths inside the module matching one of the patterns are excluded from the deployed package.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
Command argument
The command accepts no arguments
This feature is supported in JFrog CLI version 2.60.0 and above.
The git count-contributors
command allows JFrog users to easily determine the number of Git developers contributing to their code. The count reflects the developers contributing to the default branch.
The command counts the contributing developers for all commits performed within a specified time range. The results are based on email addresses, so each unique developer is counted once.
We provide several options to obtain the developer count:
A single repository: Analyze a single Git repository by providing the repository name.
Across a project/group: Analyze multiple repositories organized under a project/group by providing the owner command option.
Across multiple Git servers: Analyze repositories across various Git servers by providing a YAML file as an input file with the required parameters outlined below.
This information can be helpful when purchasing an Advanced Security subscription, as the number of developers is often a key factor in pricing.
Supported Git providers:
GitHub
GitLab
Bitbucket
The CLI outputs may include an estimation of the contributing developers based on the input provided by the user. They may be based on third-party resources and databases and JFrog does not guarantee that the CLI outputs are accurate and/or complete. The CLI outputs are not legal advice and you are solely responsible for your use of it. CLI outputs are provided "as is" and any representation or warranty of or concerning any third-party technology is strictly between the user and the third-party owner or distributor of the third-party technology.
The git count-contributors
command can be run from the JFrog CLI with the following syntax:
Command Option
Description
--scm-type
[Mandatory]
The type of SCM to use for the analysis.
Supported Values: github, gitlab, bitbucket
Example: --scm-type=github
--scm-api-url
[Mandatory]
The base URL of the SCM system's API endpoint.
Format: The full URL, including the protocol Example: --scm-api-url=https://api.github.com
--token
[Mandatory]
The authentication token required to access the SCM system's API. In the absence of the flag, the token is read from the JF_GIT_TOKEN environment variable, or from the corresponding provider-specific variables JFROG_CLI_GITLAB_TOKEN, JFROG_CLI_GITHUB_TOKEN, or JFROG_CLI_BITBUCKET_TOKEN. Example: --token=your_access_token
--owner
[Mandatory]
The owner or organization of the repositories to be analyzed. Format: depends on the Git provider. On GitHub and GitLab, the owner is typically an individual or an organization; on Bitbucket, the owner can also be a project. In the case of a private instance on Bitbucket, the individual or organization name should be prefixed with '~'. When using this option without a specific repository name, all repositories are analyzed at the group/project level. Example: --owner=your-organization
--months
[Optional]
The number of months to analyze for developer activity. Default: 1
Example: --months=6
--detailed-summary
[Optional]
Generates a more detailed summary of the contributors. Default: false
Example: --detailed-summary=true
--repo-name
[Optional]
List of semicolon-separated(;) repository names to analyze. If not provided, all repositories related to the provided owner are analyzed. Example: --repo-name=repo1;repo2
--input-file
[Optional]
The path to an input file in YAML format that contains multiple git providers. Example: --input-file="/Users/path/to/file/input.yaml"
--verbose
[Optional]
Enables verbose output, providing more detailed information.
Single Repository
Required Parameters:
--scm-type
--scm-api-url
--token
--repo-name
Group/Project
Required Parameters:
--scm-type
--scm-api-url
--token
--owner
Multiple Git Servers - YAML File
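The exact schema of the input file is not shown here; the sketch below is an assumption that simply mirrors the command options above (keys and values are illustrative and should be verified against the JFrog CLI documentation for your version):

```yaml
# Hypothetical structure mirroring the command options
git-servers-list:
  - scm-type: github
    scm-api-url: https://api.github.com
    token: your_github_token
    owner: your-organization
  - scm-type: bitbucket
    scm-api-url: https://your-bitbucket.example.com
    token: your_bitbucket_token
    owner: your-project
```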
Sample Output:
JFrog supports the following Package Managers for Visual Studio Code:
Go | Maven | npm | pnpm | Yarn | Pip | Pipenv | Poetry | .Net CLI | NuGet
Additional SCA capabilities supported:
License Violations
Autofix for direct dependencies
Note: Exclude dev dependencies is supported for npm and pnpm only.
JFrog supports Contextual Analysis, Secrets, Infrastructure as Code (IaC), and SAST for Visual Studio Code. Follow the links to learn more about each feature and its supported technologies and languages.
Behind the scenes, the JFrog VS Code Extension scans all the project dependencies, both direct and indirect (transitive), even if they are not declared in the project's go.mod. It builds the Go dependencies tree by running go mod graph
and intersecting the results with the output of the go list -f '{{with .Module}}{{.Path}} {{.Version}}{{end}}' all
command. Therefore, please make sure the Go CLI is in your system PATH.
The JFrog VS Code Extension builds the Maven dependencies tree by running mvn dependency:tree
. View licenses and top issue severities directly from the pom.xml.
Important notes:
To have your project dependencies scanned by JFrog Xray, make sure Maven is installed, and that the mvn command is in your system PATH.
For projects which include the Maven Dependency Plugin as a build plugin, with include or exclude configurations, the scanning functionality is disabled. For example:
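A pom.xml build-plugin section of the kind described might look like the following (illustrative; any include or exclude configuration on the plugin has this effect):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <configuration>
        <!-- include/exclude configurations like this disable scanning -->
        <includes>org.apache.*</includes>
      </configuration>
    </plugin>
  </plugins>
</build>
```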
Behind the scenes, the extension builds the npm dependencies tree by running npm list
. View licenses and top issue severities directly from the package.json.
Important: To have your project dependencies scanned by JFrog Xray, make sure the npm CLI is installed on your local machine and that it is in your system PATH. In addition, the project dependencies must be installed using npm install
.
Behind the scenes, the extension builds the Yarn dependencies tree by running yarn list
. View licenses and top issue severities directly from the yarn.lock.
Important:
To have your project dependencies scanned by JFrog Xray, make sure the Yarn CLI is installed on your local machine and that it is in your system PATH.
Yarn v2 is not yet supported.
Behind the scenes, the extension builds the Pypi dependencies tree by running pipdeptree
on your Python virtual environment. It also uses the Python interpreter path configured by the Python extension. View licenses and top issue severities directly from your requirements.txt files. To scan your Pypi dependencies, make sure the following requirements are met:
The Python extension for VS Code is installed.
Depending on your project, make sure Python 2 or 3 is included in your system PATH.
Create and activate a virtual env as instructed in the VS Code documentation. Make sure the Virtualenv Python interpreter is selected.
Open a new terminal and activate your Virtualenv:
On macOS and Linux:
On Windows:
In the same terminal, install your python project and dependencies according to your project specifications.
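The activation steps above can be sketched as follows, assuming a standard venv layout named .venv:

```shell
# Create a virtual environment and activate it
python3 -m venv .venv
source .venv/bin/activate          # macOS / Linux
# On Windows (cmd): .venv\Scripts\activate.bat

# Confirm the active interpreter comes from the virtual environment
python -c "import sys; print(sys.prefix)"
```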
For .NET projects which use NuGet packages as dependencies, the extension displays the NuGet dependencies tree, together with the information for each dependency. Behind the scenes, the extension builds the NuGet dependencies tree using the NuGet deps tree npm package.
Important:
Does your project define its NuGet dependencies using a packages.config file? If so, then please make sure the nuget
CLI is installed on your local machine and that it is in your system PATH. The extension uses the nuget
CLI to find the location of the NuGet packages on the local file-system.
The project must be restored using nuget restore
or dotnet restore
prior to scanning. After restoring, click the Refresh button to update the tree view.
The Command Summaries feature enables the recording of JFrog CLI command outputs into the local file system. This functionality can be used to generate a summary in the context of an entire workflow (a sequence of JFrog CLI commands) and not only in the scope of a specific command.
An example of how Command Summaries are used can be seen in the setup-cli GitHub action, which uses the compiled markdown to generate a comprehensive summary of the entire workflow.
jf rt build-publish
jf rt upload
jf scan
jf build-scan
Each command execution that incorporates this feature can save data files into the file system. These files are then used to create an aggregated summary in Markdown format.
Saving data to the file system is essential because CLI commands execute in separate contexts. Consequently, each command that records new data should also incorporate any existing data into the aggregated markdown. This is required because the CLI cannot determine which command will be the last one executed in a sequence of commands.
The CLI does not automatically remove these files, as they are designed to persist beyond a single execution. As a result, it is your responsibility to manage your pipelines and delete the files as necessary. You can clear the entire JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR directory that you have configured to activate this feature.
To use the Command Summaries, you'll need to set the JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR
environment variable. This variable designates the directory where the data files and markdown files will be stored.
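For example, in a shell-based CI step (the directory path below is an arbitrary choice):

```shell
# Any writable directory works; this path is just an example.
export JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR="$HOME/ci-summaries"
mkdir -p "$JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR"
```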
If you wish to contribute a new CLI command summary to the existing ones, you can submit a pull request once you've followed these implementation guidelines:
Implement the CommandSummaryInterface
Record data during runtime
The GenerateMarkdownFromFiles
function needs to process multiple data files, which are the results of previous command executions, and generate a single markdown string content. As each CLI command has its own context, we need to regenerate the entire markdown with the newly added results each time.
Each command that implements the CommandSummaryInterface
will have its own subdirectory inside the JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR/JFROG_COMMAND_SUMMARY
directory.
Every subdirectory will house data files, each one corresponding to a command recording, along with a markdown file that has been created from all the data files. The function implemented by the user is responsible for processing all the data files within its respective subdirectory and generating a markdown string.
JFrog CLI Plugins allow enhancing the functionality of JFrog CLI to meet the specific user and organization needs. The source code of a plugin is maintained as an open source Go project on GitHub. All public plugins are registered in JFrog CLI's Plugins Registry. We encourage you, as developers, to create plugins and share them publicly with the rest of the community. When a plugin is included in the registry, it becomes publicly available and can be installed using JFrog CLI. Read the JFrog CLI Plugins Developer Guide if you wish to create and publish your own plugins.
A plugin which is included in JFrog CLI's Plugins Registry can be installed using the following command.
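The install command takes the plugin name, with an optional version. The plugin name hello-frog below is JFrog's example plugin, and the version is illustrative:

```shell
# Install the latest version of a plugin from the registry:
jf plugin install hello-frog

# Or pin a specific version:
jf plugin install hello-frog@v1.0.0
```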
This command will install the plugin from the official public registry by default. You can also install a plugin from a private JFrog CLI Plugin registry, as described in the Private Plugins Registries section.
In addition to the public official JFrog CLI Plugins Registry, JFrog CLI supports publishing and installing plugins to and from private JFrog CLI Plugins Registries. A private registry can be hosted on any Artifactory server. It uses a local generic Artifactory repository for storing the plugins.
To create your own private plugins registry, follow these steps.
On your Artifactory server, create a local generic repository named jfrog-cli-plugins.
Make sure your Artifactory server is included in JFrog CLI's configuration, by running the jf c show command.
If needed, configure your Artifactory instance using the jf c add command.
Set the ID of the configured server as the value of the JFROG_CLI_PLUGINS_SERVER environment variable.
If you wish the name of the plugins repository to be different from jfrog-cli-plugins, set this name as the value of the JFROG_CLI_PLUGINS_REPO environment variable.
The jf plugin install command will now install plugins stored in your private registry.
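Putting the steps above together, a minimal setup might look like the following. The server ID my-server, the repository name, and the plugin name are all assumptions:

```shell
# The server ID must match an entry shown by "jf c show".
export JFROG_CLI_PLUGINS_SERVER=my-server

# Only needed if the repository is not named "jfrog-cli-plugins":
export JFROG_CLI_PLUGINS_REPO=my-plugins-repo

# Plugins are now resolved from the private registry:
jf plugin install my-plugin
```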
To publish a plugin to the private registry, run the following command, while inside the root of the plugin's sources directory. This command will build the sources of the plugin for all the supported operating systems. All binaries will be uploaded to the configured registry.
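The publish command takes the plugin name and version; both values below are illustrative:

```shell
# Run from the root of the plugin's sources directory:
jf plugin publish my-plugin v1.0.0
```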
The cost of remediating a vulnerability is akin to the cost of fixing a bug. The earlier you remediate a vulnerability in the release cycle, the lower the cost. The extension allows developers to find and fix security vulnerabilities in their projects and to see valuable information about the status of their code by continuously scanning it locally with JFrog Xray.
Software Composition Analysis (SCA)
Scan your project dependencies for security issues. For selected security issues, get enriched CVE data provided by our JFrog Security Research team. To learn more about enriched CVEs, see here.
Requires Xray version 3.66.5 or above and Enterprise X / Enterprise+ subscription with Advanced Security.
Contextual Analysis
With advanced Contextual Analysis, understand the applicability of CVEs in your application and utilize JFrog Security scanners to analyze the way you use third-party packages in your projects. Automatically validate some high-impact vulnerabilities, such as vulnerabilities that have prerequisites for exploitation, and reduce false positives and vulnerability noise with smart CVE analysis. To learn more, see here.
Infrastructure as Code (IaC) Scan
Analyze Infrastructure as Code (IaC) files, such as Terraform, to identify security vulnerabilities and misconfigurations before deploying your cloud infrastructure.
Get actionable insights and recommendations for securing your IaC configurations.
Secrets Detection
Detect and prevent the inclusion of sensitive information, such as credentials and API keys, in your codebase.
Additional Perks
Security issues are easily visible inline.
The results show issues with context, impact, and remediation.
View all security issues in one place, in the JFrog tab.
For Security issues with an available fixed version, you can upgrade to the fixed version within the plugin.
Track the status of the code while it is being built, tested, and scanned on the CI server.
The extension also applies the JFrog File Spec JSON schema on the following file patterns: **/filespecs/*.json, *filespec*.json and *.filespec. Read more about JFrog File specs here.
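For reference, a minimal File Spec might look like this; the pattern and target values are examples only:

```json
{
  "files": [
    {
      "pattern": "build/*.zip",
      "target": "generic-local/releases/"
    }
  ]
}
```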
JFrog's IDE support significantly streamlines the development process, allowing developers to discover and remediate security issues as early as possible in the release cycle.
Supported IDEs
The extension offers two modes, Local and CI. The two modes can be toggled by clicking their respective buttons, which appear next to the components tree.
The Local view displays information about the local code as it is being developed in VS Code. The developer can scan their local workspace continuously. The information is displayed in the Local view.
The CI view allows the tracking of the code as it is built, tested and scanned by the CI server. It displays information about the status of the build and includes a link to the build log on the CI server.
The icon demonstrates the top severity issue of a selected component and its transitive dependencies. The following table describes the severities from highest to lowest:
Critical: Issue with critical severity
High: Issue with high severity
Medium: Issue with medium severity
Low: Issue with low severity
Unknown: Issue with unknown severity
Not Applicable: CVE issue that is not applicable to your source code
Normal: No issues (used only in the CI view)
The JFrog VS Code Extension enables continuous scans of your project with the JFrog Platform. The security related information will be displayed under the Local view. It allows developers to view vulnerability information about their dependencies and source code in their IDE. With this information, you can make an informed decision on whether to use a component or not before it gets entrenched into the organization’s product.
The CI view of the extension allows you to view information about your builds directly from your CI system. This allows developers to keep track of the status of their code, while it is being built, tested and scanned as part of the CI pipeline, regardless of the CI provider used.
This information can be viewed inside JFrog VS Code Extension, from the JFrog Panel, after switching to CI mode.
The following details can be made available in the CI view.
Status of the build run (passed or failed)
Build run start time
Git branch and latest commit message
Link to the CI run log
Security information about the build artifacts and dependencies
The CI information displayed in VS Code is pulled by the JFrog Extension directly from JFrog Artifactory. This information is stored in Artifactory as part of the build-info, which is published to Artifactory by the CI server.
Read more about build-info in the Build Integration documentation page. If the CI pipeline is also configured to scan the build-info by JFrog Xray, the JFrog VS Code Extension will pull the results of the scan from JFrog Xray and display them in the CI view as well.
Before VS Code can display information from your CI in the CI View, your CI pipeline needs to be configured to expose this data. Read this guide which describes how to configure your CI pipeline.
Set your CI build name in the Build name pattern field at the Extension Settings. This is the name of the build published to Artifactory by your CI pipeline. You have the option of setting * to view all the builds published to Artifactory.
The extension is available to install from the VS Code extensions marketplace. After installation, the JFrog extension tab will appear in the Activity Bar.
To access the extension settings, click on the gear icon:
By default, paths containing the words .git, test, venv and node_modules are excluded from the Xray scan. The exclude pattern can be configured in the Extension Settings.
If your JFrog environment is behind an HTTP/S proxy, follow these steps to configure the proxy server:
Go to Preferences --> Settings --> Application --> Proxy
Set the proxy URL under Proxy.
Make sure Proxy Support is set to override or on.
Alternatively, you can use the HTTP_PROXY and HTTPS_PROXY environment variables.
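For example, using the environment variables (the proxy address is a placeholder):

```shell
# Replace the host and port with your proxy server's details.
export HTTP_PROXY="http://my-proxy.local:8080"
export HTTPS_PROXY="http://my-proxy.local:8080"
```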
JFrog VS Code extension requires necessary resources for scanning your projects. By default, the JFrog VS Code extension downloads the resources it requires from https://releases.jfrog.io. If the machine running JFrog VS Code extension has no access to it, follow these steps to allow the resources to be downloaded through an Artifactory instance, which the machine has access to:
Login to the JFrog Platform UI, with a user who has admin permissions.
Create a Remote Repository with the following properties set:
Under the Basic tab:
Package Type: Generic
Repository Key: jfrog-releases-repository
Under the Advanced tab:
Uncheck the 'Store Artifacts Locally' option
Navigate to the Settings in JFrog VS Code Extension
Insert the Repository Key you created in the Repository key text field
Or set the JFROG_IDE_RELEASES_REPO environment variable with the Repository Key you created.
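For example, using the environment variable with the repository key created above:

```shell
export JFROG_IDE_RELEASES_REPO="jfrog-releases-repository"
```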
If your proxy server requires credentials, follow these steps:
Follow steps 1-3 under Proxy configuration.
Basic authorization
Encode [Username]:[Password] with base64.
Under 'Proxy Authorization' click on 'Edit in settings.json'.
Add to settings.json: "http.proxyAuthorization": "Basic [Encoded credentials]".
Access token authorization
Under 'Proxy Authorization' click on 'Edit in settings.json'.
Add to settings.json: "http.proxyAuthorization": "Bearer [Access token]".
Example
Username: foo
Password: bar
settings.json:
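With these example credentials, foo:bar encodes in base64 to Zm9vOmJhcg==, so settings.json would contain:

```json
{
  "http.proxyAuthorization": "Basic Zm9vOmJhcg=="
}
```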
You can configure the JFrog VS-Code extension to use the security policies you create in Xray. Policies enable you to create a set of rules, in which each rule defines security criteria, with a corresponding set of automatic actions according to your needs. Policies are enforced when applying them to Watches.
If you'd like to use a JFrog Project that is associated with the policy, follow these steps:
Create a JFrog Project, or obtain the relevant JFrog Project key.
Create a Policy on JFrog Xray.
Create a Watch on JFrog Xray and assign your Policy and Project as resources to it.
Configure your Project key in the Extension Settings.
If however your policies are referenced through an Xray Watch or Watches, follow these steps instead:
Create one or more Watches on JFrog Xray.
Configure your Watches in the Extension Settings.
Change the log level to debug, info, warn, or err in the Extension Settings.
Once the JFrog Extension is installed in VS Code, click on the JFrog tab:
This will open the Sign in page:
Fill in your connection details and click the Sign In button to start using the extension.
Note: If you would like to use custom URLs for Artifactory or Xray, click Advanced.
You can also choose other options to authenticate with your JFrog Platform instance:
To sign in using SSO, follow these steps:
On the sign-in page, click the Continue with SSO button:
After entering your JFrog Platform URL, click Sign in With SSO.
It will take a few seconds for the browser to redirect you to the SSO sign in page.
You should now be signed in to VS Code.
If JFrog CLI is installed on your machine and is configured with your JFrog Platform connection details, you should see this message popup in the Sign in page:
You may set the connection details using the following environment variables. VS Code will read them after it is launched.
JFROG_IDE_URL - JFrog URL
JFROG_IDE_USERNAME - JFrog username
JFROG_IDE_PASSWORD - JFrog password
JFROG_IDE_ACCESS_TOKEN - JFrog access token
JFROG_IDE_STORE_CONNECTION - Set the value of this environment variable to true if you'd like VS Code to store the connection details after reading them from the environment variables.
Once the above environment variables are configured, you can expect to see a message popup in the Sign in page:
Note: For security reasons, it is recommended to unset the environment variables after launching VS Code.
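For example, the environment could be prepared as follows before launching VS Code. All values are placeholders; use either a password or an access token, not both:

```shell
export JFROG_IDE_URL="https://myplatform.jfrog.io"
export JFROG_IDE_USERNAME="myuser"
export JFROG_IDE_ACCESS_TOKEN="my-token"
export JFROG_IDE_STORE_CONNECTION=true
```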
JFrog provides you the ability to migrate from a self-hosted JFrog Platform installation to JFrog Cloud so that you can seamlessly transition into JFrog Cloud. You can use the JFrog CLI to transfer the Artifactory configuration settings and binaries to JFrog Cloud.
JFrog Cloud provides the same cutting-edge functionalities of a self-hosted JFrog Platform Deployment (JPD), without the overhead of managing the databases and systems. If you are an existing JFrog self-hosted customer, you might want to move to JFrog Cloud to ease operations. JFrog provides a solution that allows you to replicate your self-hosted JPD to a JFrog Cloud JPD painlessly.
The Artifactory Transfer solution currently transfers the config and data of JFrog Artifactory only. Other products, such as JFrog Xray and Distribution, are currently not supported by this solution.
In this page, we refer to the source self-hosted instance as the source instance, and the target JFrog Cloud instance as the target instance.
Artifactory Version Support: The Artifactory Transfer solution is supported for any version of Artifactory 7.x, and for Artifactory 6.23.21 and above. If your current Artifactory version is not compatible, consider upgrading the Artifactory instance.
Supported OS Platforms: The transfer tool can help transfer the files and configuration from operating systems of all types, including Windows and Container environments.
The following limitations need to be kept in mind before you start the migration process:
The Archive Search Enabled feature is not supported on JFrog Cloud.
Artifactory System Properties are not transferred and JFrog Cloud defaults are applied after the transfer.
User plugins are not supported on JFrog Cloud.
Artifact Cold Storage is not supported in JFrog Cloud.
Artifacts in remote repositories caches are not transferred.
Federated repositories are transferred without their federation members. After the transfer, you'll need to reconfigure the federation as described in the Federated Repositories documentation.
Docker repositories with names that include dots or underscores aren't allowed in JFrog Cloud.
Artifact properties with a value longer than 2.4K characters are not supported in JFrog Cloud. Such properties are generally seen in Conan artifacts. The artifacts will be transferred without the properties in this case. A report with these artifacts will become available to you at the end of the transfer.
The files transfer process allows transferring files that were created or modified on the source instance after the process started. However:
Files that were deleted on the source instance after the process started, are not deleted on the target instance by the process.
The custom properties of those files are also updated on the target instance. However, if only the custom properties of those files were modified on the source, but not the files' content, the properties are not modified on the target instance by the process.
When transferring files in build-info repositories, JFrog CLI limits the total number of working threads to 8. This is done to limit the load on the target instance while transferring build-info.
The transfer process includes two phases, that you must perform in the following order:
Configuration Transfer: Transfers the configuration entities like users, permissions, and repositories from the source instance to the target instance.
File Transfer: Transfers the files (binaries) stored in the source instance repositories to the target instance repositories.
Note
Files that are cached by remote repositories aren't transferred.
The content of Artifactory's Trash Can isn't transferred.
You can do both steps while the source instance is in use. No downtime on the source instance is required while the transfer is in progress.
If your source instance hosts files that are larger than 25 GB, they will be blocked during the transfer. To learn how to check whether large files are hosted by your source instance, and what to do in that case, read this section.
Ensure that you can log in to the UI of both the source and target instances with users that have admin permissions.
Ensure that the target instance license does not support fewer features than the source instance license.
Run the file transfer pre-checks as described here.
Ensure that all the remote repositories on the source Artifactory instance have network access to their destination URL once they are created in the target instance. Even if one remote or federated repository does not have access, the configuration transfer operation will be cancelled. You do have the option of excluding specific repositories from being transferred.
Ensure that all the replications configured on the source Artifactory instance have network access to their destination URL once they are created in the target instance.
Ensure that you have a user who can log in to MyJFrog.
Ensure that you can log in to the primary node of your source instance through a terminal.
Perform the following steps to transfer configuration and artifacts from the source to the target instance. You must run the steps in the exact sequence, and must not run any of the commands in parallel.
By default, the target does not have the APIs required for the configuration transfer. Enabling the target instance for configuration transfer is done through MyJFrog. Once the configuration transfer is complete, you must disable the configuration transfer in MyJFrog as described in Step 4 below.
Warning
Enabling configuration transfer will trigger a shutdown of JFrog Xray, Distribution, Insights and Pipelines in the cloud and these services will therefore become unavailable. Once you disable the configuration transfer later on in the process, these services will be started up again.
Enabling configuration transfer will scale down JFrog Artifactory, which will reduce its available resources. Once you disable the configuration transfer later on in the process, Artifactory will be scaled up again.
Follow the below steps to enable the configuration transfer.
Log in to MyJFrog.
Click on Settings.
If you have an Enterprise+ subscription with more than one Artifactory instance, select the target instance from the drop-down menu.
The configuration transfer is now enabled, and you can continue with the transfer process.
To set up the source instance, you must install the data-transfer user plugin in the primary node of the source instance. This section guides you through the installation steps.
Install JFrog CLI on the primary node machine of the source instance as described here.
Configure the connection details of the source Artifactory instance with your admin credentials by running the following command from the terminal.
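The configuration command is presumably of the following form; the server ID source-server, URL, and credentials are placeholders:

```shell
# Configure the source instance; replace the URL and credentials with your own.
jf c add source-server --url=https://source.example.com --user=admin --password=mypassword --interactive=false
```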
Ensure that the JFROG_HOME environment variable is set and holds the value of the JFrog installation directory. It usually points to the /opt/jfrog directory. In case the variable isn't set, set its value to point to the correct directory as described in the JFrog Product Directory Structure article.
If the source instance has internet access, follow this single step:
Download and install the data-transfer user plugin by running the following command from the terminal
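Assuming the source instance was configured under the server ID source-server, the command is presumably:

```shell
# Downloads and installs the data-transfer user plugin on the source instance.
jf rt transfer-plugin-install source-server
```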
If the source instance has no internet access, follow these steps instead.
Download the following two files from a machine that has internet access: data-transfer.jar and dataTransfer.groovy.
Create a new directory on the primary node machine of the source instance and place the two files you downloaded inside this directory.
Install the data-transfer user plugin by running the following command from the terminal. Replace the <plugin files dir>
token with the full path to the directory which includes the plugin files you downloaded.
If the above is not an option, you may also load the transfer plugin manually into the on-premise plugins directory to continue with the transfer process.
Step-1: Download the dataTransfer JAR file from here (https://releases.jfrog.io/artifactory/jfrog-releases/data-transfer/[RELEASE]/lib/data-transfer.jar) and add it under $JFROG_HOME/artifactory/var/etc/artifactory/plugins/lib/. If the "lib" directory is not present, create one.
Step-2: Download the dataTransfer.groovy file from here (https://releases.jfrog.io/artifactory/jfrog-releases/data-transfer/[RELEASE]/dataTransfer.groovy) and add it under $JFROG_HOME/artifactory/var/etc/artifactory/plugins/.
Step-3: Reload the plugin using the following command. curl -u admin -X POST http://localhost:8082/artifactory/api/plugins/reload
If the plugin is loaded successfully, the source instance is all set to proceed with the configuration transfer.
Warning
The following process will wipe out the entire configuration of the target instance, and replace it with the configuration of the source instance. This includes repositories and users.
Install JFrog CLI on the source instance machine as described here.
Configure the connection details of the source Artifactory instance with your admin credentials by running the following command from the terminal.
Configure the connection details of the target Artifactory server with your admin credentials by running the following command from the terminal.
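For example, the two configuration commands might look like this; the server IDs source-server and target-server, the URLs, and the credentials are placeholders:

```shell
# Configure both instances; replace URLs and credentials with your own.
jf c add source-server --url=https://source.example.com --user=admin --password=mypassword --interactive=false
jf c add target-server --url=https://mycompany.jfrog.io --user=admin --password=mypassword --interactive=false
```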
Run the following command to verify that the target URLs of all the remote repositories are accessible from the target.
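This is presumably the transfer-config prechecks run, using the server IDs configured earlier (shown here as source-server and target-server):

```shell
jf rt transfer-config source-server target-server --prechecks
```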
If the command output shows that a target URL isn't accessible for any of the repositories, you'll need to make the URL accessible before proceeding to transfer the config. You can then rerun the command to ensure that the URLs are accessible.
If the command execution fails with an error indicating that the configuration import failed against the target server due to existing data, review the configuration present in the cloud instance before using the --force flag to override it, to ensure that overriding is safe. If you would like to preserve the existing configuration in the cloud instance while transferring the additional data from the self-hosted instance, refer to the link here (https://docs.jfrog-applications.jfrog.io/jfrog-applications/jfrog-cli/cli-for-jfrog-cloud-transfer#transferring-projects-and-repositories-from-multiple-source-instances). That section describes a merge task, rather than a transfer, to sync the data between the instances.
NOTE: Users will not be transferred while executing merge. Only Repositories and Projects will be merged with the cloud instance.
Note
The following process will wipe out the entire configuration of the target instance, and replace it with the configuration of the source instance. This includes repositories and users.
Transfer the configuration from the source to the target by running the following command.
This command might take up to two minutes to run.
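Using the same server IDs configured earlier (shown here as source-server and target-server), the command is presumably:

```shell
jf rt transfer-config source-server target-server
```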
Note
By default, the command will not transfer the configuration if it finds that the target instance isn't empty. This can happen, for example, if you ran the transfer-config command before. If you'd like to force the command to run anyway and overwrite the existing configuration on the target, run the command with the --force option. In case you do not wish to transfer all repositories, you can use the --include-repos and --exclude-repos command options. Run the following command to see the usage of these options: jf rt transfer-config -h
Troubleshooting
Did you encounter the following error when running the command?
This error commonly occurs on Red Hat Enterprise Linux (RHEL) and CentOS platforms. The issue arises because the CLI process expects the temporary directory (/tmp) to be owned by the artifactory user, even when the process is run by root. To resolve this issue, follow these steps:
Create a new directory named tmp in your home directory:
Assign ownership of the new tmp directory to the artifactory user and group:
Inform JFrog CLI to use the new temporary directory by setting the JFROG_CLI_TEMP_DIR environment variable:
Execute the transfer-config command again
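The steps above can be sketched as follows; they assume an artifactory user and group exist on the machine:

```shell
# 1. Create a new tmp directory in your home directory:
mkdir -p ~/tmp

# 2. Assign ownership of the new tmp directory to the artifactory user and group:
sudo chown artifactory:artifactory ~/tmp

# 3. Point JFrog CLI at the new temporary directory:
export JFROG_CLI_TEMP_DIR=~/tmp
```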
View the command output in the terminal to verify that there are no errors. The command output is divided into the following four phases:
The target instance should now be accessible with the admin credentials of the source instance. Log in to the target instance UI. The target instance should now have the same repositories as the source.
Once the configuration transfer is successful, you need to disable the configuration transfer on the target instance. This is important both for security reasons and because the target server remains low on resources while configuration transfer is enabled.
Login to MyJFrog
Under the Actions menu, choose Enable Configuration Transfer.
Toggle Enable Configuration Transfer to off to disable configuration transfer.
Disabling the configuration transfer might take some time.
Before initiating the file transfer process, we highly recommend running pre-checks, to identify issues that can affect the transfer. You trigger the pre-checks by running a JFrog CLI command on your terminal. The pre-checks will verify the following:
There's network connectivity between the source and target instances.
The source instance does not include artifacts with properties with values longer than 2.4K characters. This is important, because values longer than 2.4K characters are not supported in JFrog Cloud, and those properties are skipped during the transfer process.
To run the pre-checks, follow these steps:
Install JFrog CLI on any machine that has access to both the source and the target JFrog instances. To do this, follow the steps described here.
Run the following command:
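Assuming the source and target were configured under the server IDs source-server and target-server, the prechecks command is presumably:

```shell
jf rt transfer-files source-server target-server --prechecks
```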
Initiating File Transfer: Run the following command to start pushing the files from all the repositories in the source instance to the target instance.
If you're running the command in the background, you can use the following command to view the transfer progress.
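For example, with the server IDs used earlier (source-server and target-server are assumptions):

```shell
# Start pushing files from all repositories on the source to the target:
jf rt transfer-files source-server target-server

# From another terminal, view the progress of a transfer running in the background:
jf rt transfer-files --status
```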
Note
In case you do not wish to transfer the files from all repositories, or wish to run the transfer in phases, you can use the --include-repos and --exclude-repos command options. Run the following command to see the usage of these options: jf rt transfer-files -h
If the traffic between the source and target instance needs to be routed through an HTTPS proxy, refer to this section.
You can stop the transfer process by pressing CTRL+C if the process is running in the foreground, or by running the following command if you're running the process in the background.
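For example, to stop a background transfer, with the server IDs configured earlier:

```shell
jf rt transfer-files source-server target-server --stop
```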
The process will continue from the point it stopped when you re-run the command.
While the file transfer is running, monitor the load on your source instance, and if needed, reduce the transfer speed or increase it for better performance. For more information, see the Controlling the file transfer speed.
A path to an errors summary file will be printed at the end of the run, referring to a generated CSV file. Each line in the summary CSV represents an error log of a file that failed to be transferred. On subsequent executions of the jf rt transfer-files command, JFrog CLI will attempt to transfer these files again.
Once the jf rt transfer-files command finishes transferring the files, you can run it again to transfer files that were created or modified during the transfer. You can run the command as many times as needed. Subsequent executions of the command will also attempt to transfer files that failed to be transferred during previous executions of the command.
Note
Read more about how the transfer files works in this section.
You have the option to sync the configuration between the source and target after the file transfer process is complete. You may want to do this if new config entities, such as projects, repositories, or users, were created or modified on the source while the file transfer process was running. To do this, simply repeat steps 1-3 above.
Transferring files larger than 25 GB: By default, files that are larger than 25 GB will be blocked by the JFrog Cloud infrastructure during the file transfer. To check whether your source Artifactory instance hosts files larger than that size, run the following curl command from your terminal, after replacing the <source instance URL>, <username> and <password> tokens with your source instance details. The command execution may take a few minutes, depending on the number of files hosted by Artifactory.
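A sketch of such a command, using an AQL query to find the single largest item (the exact query in the official documentation may differ):

```shell
# Returns the item with the largest size, sorted descending by size.
curl -u <username>:<password> -X POST "<source instance URL>/artifactory/api/search/aql" \
  -H "Content-Type: text/plain" \
  -d 'items.find().sort({"$desc":["size"]}).limit(1)'
```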
You should get a result that looks like the following.
The value of size represents the largest file size hosted by your source Artifactory instance.
If the size value you received is larger than 25000000000, please avoid initiating the files transfer before contacting JFrog Support, to check whether this size limit can be increased for you. You can contact Support by sending an email to support@jfrog.com
Routing the traffic from the source to the target through an HTTPS proxy: The jf rt transfer-files
command pushes the files directly from the source to the target instance over the network. In case the traffic from the source instance needs to be routed through an HTTPS proxy, follow these steps.
a. Define the proxy details in the source instance UI as described in the Managing Proxies documentation.
b. When running the jf rt transfer-files command, add the --proxy-key option to the command, with the Proxy Key you configured in the UI as the option value. For example, if the Proxy Key you configured is my-proxy-key, run the command as follows:
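For example, with the server IDs used earlier (source-server and target-server are assumptions):

```shell
jf rt transfer-files source-server target-server --proxy-key my-proxy-key
```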
The jf rt transfer-config command transfers all the config entities (users, groups, projects, repositories, and more) from the source to the target instance. While doing so, the existing configuration on the target is deleted and replaced with the new configuration from the source. If you'd like to transfer the projects and repositories from multiple source instances to a single target instance, while preserving the existing configuration on the target, follow the below steps.
Note
These steps trigger the transfer of the projects and repositories only. Other configuration entities like users are currently not supported.
Ensure that you have admin access tokens for both the source and target instances. You'll have to use an admin access token and not an Admin username and password.
Install JFrog CLI on any machine that has access to both the source and the target instances using the steps described here. Make sure to use the admin access tokens and not an admin username and password when configuring the connection details of the source and the target.
Run the following command to merge all the projects and repositories from the source to the target instance.
Note
In case you do not wish to transfer the files from all the projects or repositories, or wish to run the transfer in phases, you can use the
--include-projects, --exclude-projects, --include-repos, and --exclude-repos
command options. Run the following command to see the usage of these options.
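A sketch of the merge and its help usage, assuming the source and target instances were configured with the hypothetical server IDs source-server and target-server (the transfer-config-merge command is the JFrog CLI command for merging projects and repositories):

```shell
# Merge all projects and repositories from the source into the target
jf rt transfer-config-merge source-server target-server

# Show the usage of the --include/--exclude options
jf rt transfer-config-merge -h
```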
The jf rt transfer-files
command pushes the files from the source instance to the target instance as follows:
The files are pushed for each repository, one by one in sequence.
For each repository, the process includes the following three phases:
Phase 1 pushes all the files in the repository to the target.
Phase 2 pushes files that have been created or modified after phase 1 started running (diffs).
Phase 3 attempts to push files that failed to be transferred in earlier phases (Phase 1 or Phase 2) or in previous executions of the command.
If Phase 1 finished running for a specific repository, and you run the jf rt transfer-files
command again, only Phase 2 and Phase 3 will be triggered. You can run the jf rt transfer-files
command as many times as needed, until you are ready to move your traffic to the target instance permanently. In any subsequent run of the command, Phase 2 will transfer the newly created and modified files, and Phase 3 will retry transferring files that failed to be transferred in previous phases and also in previous runs of the command.
To achieve this, JFrog CLI stores the current state of the file transfer process in a directory named transfer
under the JFrog CLI home directory. You can usually find this directory at ~/.jfrog/transfer.
JFrog CLI uses the state stored in this directory to avoid repeating transfer actions performed in previous executions of the command. For example, once Phase 1 is completed for a specific repository, subsequent executions of the command will skip Phase 1 and run Phase 2 and Phase 3 only.
In case you'd like to ignore the stored state, and restart the file transfer from scratch, you can add the --ignore-state
option to the jf rt transfer-files
command.
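For example, assuming the hypothetical server IDs source-server and target-server:

```shell
# Restart the transfer from scratch, ignoring the state stored under ~/.jfrog/transfer
jf rt transfer-files source-server target-server --ignore-state
```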
Unlike the transfer-config command, which should be run from the primary node machine of Artifactory, the transfer-files command is best run from a machine that has network access to the source Artifactory URL. This allows spreading the transfer load across all the Artifactory cluster nodes. This machine should also have network access to the target Artifactory URL.
Follow these steps to install JFrog CLI on that machine.
Install JFrog CLI by using one of the JFrog CLI Installers. For example:
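On Linux or macOS, for example, the one-line installer can be used (a sketch; see the JFrog CLI installation page for other installation methods):

```shell
curl -fL https://install-cli.jfrog.io | sh
```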
If your source instance is accessible only through an HTTP/HTTPS proxy, set the proxy environment variable as described here.
Configure the connection details of the source Artifactory instance with your admin credentials. Run the following command and follow the instructions.
Configure the connection details of the target Artifactory instance.
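The connection details are added with the jf c add command. A sketch, using the hypothetical server IDs source-server and target-server (use the admin access tokens when prompted):

```shell
# Configure the source instance
jf c add source-server

# Configure the target instance
jf c add target-server
```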
Install JFrog CLI on your source instance by using one of the JFrog CLI Installers. For example:
Note
If the source instance is running as a docker container, and you're not able to install JFrog CLI while inside the container, follow these steps.
Connect to the host machine through the terminal.
Download the JFrog CLI executable into the correct directory by running this command.
Copy the JFrog CLI executable you've just downloaded to the container, by running the following docker command. Make sure to replace <the container name>
with the name of the container.
Connect to the container and run the following command to ensure JFrog CLI is installed.
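The steps above can be sketched as follows, assuming a Linux host (replace <the container name> with the name of the container):

```shell
# On the host: download the JFrog CLI executable into the current directory
curl -fL https://getcli.jfrog.io/v2-jf | sh

# Copy the executable into the container
docker cp jf <the container name>:/usr/local/bin/jf

# Verify that JFrog CLI is installed inside the container
docker exec <the container name> jf --version
```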
The jf rt transfer-files
command pushes the binaries from the source instance to the target instance. This transfer can take days, depending on the size of the total data transferred, the network bandwidth between the source and the target instance, and additional factors.
Since the process is expected to run while the source instance is still being used, monitor the instance to ensure that the transfer does not add too much load to it. Also, you might decide to increase the load for faster transfer while you monitor the transfer. This section describes how to control the file transfer speed.
By default, the jf rt transfer-files
command uses 8 working threads to push files from the source instance to the target instance. Reducing this value will cause slower transfer speed and lower load on the source instance, and increasing it will do the opposite. We therefore recommend increasing it gradually. This value can be changed while the jf rt transfer-files
command is running. There's no need to stop the process to change the number of working threads. The new value will be cached by JFrog CLI and also used for subsequent runs from the same machine. To set the value, simply run the following interactive command from a new terminal window on the same machine that runs the jf rt transfer-files
command.
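The interactive command for changing the number of working threads is transfer-settings:

```shell
# Prompts for a new number of working threads; the value is cached for subsequent runs
jf rt transfer-settings
```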
When your self-hosted Artifactory hosts hundreds of terabytes of binaries, you may consult with your JFrog account manager about the option of reducing the file transfer time by manually copying the entire filestore to the JFrog Cloud storage. This reduces the transfer time because the binaries' content does not need to be transferred over the network.
The jf rt transfer-files
command transfers the metadata of the binaries to the database (file paths, file names, properties, and statistics). The command also transfers the binaries that have been created and modified after you copy the filestore.
To run the file transfer after you copy the filestore, add the --filestore
command option to the jf rt transfer-files
command.
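For example, assuming the hypothetical server IDs source-server and target-server:

```shell
# Transfer only the metadata, plus binaries created or modified after the filestore copy
jf rt transfer-files source-server target-server --filestore
```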
To help reduce the time it takes for Phase 2 to run, you may configure Event-Based Push Replication for some or all of the local repositories on the source instance (see the Repository Replication documentation). With Replication configured, when files are created or updated on the source repository, they are immediately replicated to the corresponding repository on the target instance.
Replication can be configured at any time: before, during, or after the file transfer process.
Why is the total file count on my source and target instances different after the files transfer finishes?
It is expected to sometimes see significant differences between the file counts on the source and target instances after the transfer ends. These differences can have many causes, and in most cases are not an indication of an issue. For example, Artifactory may include file cleanup policies that are triggered by file deployment. This can cause some files to be cleaned up from the target repository after they are transferred.
How can I validate that all files were transferred from the source to the target instance?
There's actually no need to validate that all files were transferred at the end of the transfer process. JFrog CLI performs this validation for you while the process is running. It does that as follows.
JFrog CLI traverses the repositories on the source instance and pushes all files to the target instance.
If a file fails to reach the target instance or isn't deployed there successfully, the source instance logs this error with the file details.
At the end of the transfer process, JFrog CLI provides you with a summary of all files that failed to be pushed.
The failures are also logged inside the transfer
directory under the JFrog CLI home directory. This directory is usually located at ~/.jfrog/transfer
. Subsequent runs of the jf rt transfer-files
command use this information for attempting another transfer of the files.
Does JFrog CLI validate the integrity of files, after they are transferred to the target instance?
Yes. The source Artifactory instance stores a checksum for every file it hosts. When files are transferred to the target instance, they are transferred with the checksums as HTTP headers. The target instance calculates the checksum for each file it receives and then compares it to the received checksum. If the checksums don't match, the target reports this to the source, which will attempt to transfer the file again at a later stage of the process.
Can I stop the jf rt transfer-files command and then start it again? Would that cause any issues?
You can stop the command at any time by hitting CTRL+C and then run it again. JFrog CLI stores the state of the transfer process in the "transfer" directory under the JFrog CLI home directory. This directory is usually located at ~/.jfrog/transfer
. Subsequent executions of the command use the data stored in that directory to try and avoid transferring files that have already been transferred in previous command executions.
Basic
Software Composition Analysis (SCA)
Scans your project dependencies for security issues and shows you which dependencies are vulnerable. If the vulnerabilities have a fix, you can upgrade to the version with the fix in a click of a button.
CVE Research and Enrichment
For selected security issues, get enhanced CVE data that is provided by our JFrog Security Research team. Prioritize the CVEs based on:
JFrog Severity: The severity given by the JFrog Security Research team after the manual analysis of the CVE by the team. CVEs with the highest JFrog security severity are the most likely to be used by real-world attackers. This means that you should put effort into fixing them as soon as possible.
Research Summary: The summary that is based on JFrog's security analysis of the security issue provides detailed technical information on the specific conditions for the CVE to be applicable.
Remediation: Detailed fix and mitigation options for the CVEs
Advanced
CVEs Contextual Analysis
Uses the code context to eliminate false positive reports on vulnerable dependencies that are not applicable to the code. CVEs Contextual Analysis is currently supported for Python, Java and JavaScript code.
Secrets Detection
Prevents the exposure of keys or credentials that are stored in your source code.
Infrastructure as Code (IaC) Scan
Secures your IaC files. Critical to keeping your cloud deployment safe and secure.
Additional Perks
Security issues are easily visible inline.
The results show issues with context, impact, and remediation.
View all security issues in one place, in the JFrog tab.
For Security issues with an available fixed version, you can upgrade to the fixed version within the plugin.
Track the status of the code while it is being built, tested, and scanned on the CI server.
The JFrog Plugin supports the following IDEs:
IntelliJ IDEA
WebStorm
PyCharm
Android Studio
GoLand
Rider
CLion
Update a vulnerable direct dependency to a fixed version directly from the vulnerable location at the editor using the quick fix.
The JFrog extension incorporates a file tree displaying all the vulnerabilities within the project. Each file containing vulnerabilities appears as a tree node.
A descriptor file (e.g., pom.xml in Maven, go.mod in Go, etc.) has a special meaning: it outlines the available direct dependencies for the project. The tree shows descriptor files containing vulnerable dependencies. In cases where a direct dependency contains vulnerable child dependencies, the tree shows the vulnerable child dependencies instead, denoting them with an '(indirect)' postfix.
Furthermore, various types of vulnerability nodes, such as Contextual Analysis Vulnerabilities or hard-coded secrets, may be present in other source code files.
Each file node in the tree is interactive; click and expand it to view its child nodes and navigate to the corresponding file in the IDE for better visibility. Upon navigating to a file, the extension highlights the vulnerable line, making it easier to locate the specific issue.
In addition, the locations with vulnerabilities are marked in the editor. By clicking on the light bulb icon next to a vulnerable location in the editor, you can instantly jump to the corresponding entry in the tree view.
Clicking on a CVE in the list will open the location with the issue in the editor and a vulnerability details view. This view contains information about the vulnerability, the vulnerable component, fixed versions, impact paths, and much more.
NOTES:
From JFrog Xray version 1.9 to 2.x, IntelliJ IDEA users connecting to Xray from IntelliJ are required to be granted the ‘View Components’ action in Xray.
After the JFrog Plugin is installed, a new JFrog panel is added at the bottom of the screen. Opening the JFrog panel displays two views:
The Local view displays information about the local code as it is being developed in the IDE. You can continuously scan your project locally. The information is displayed in the Local view.
The CI view allows the tracking of the code as it is built, tested and scanned by the CI server. It displays information about the status of the build and includes a link to the build log on the CI server.
The JFrog Plugin enables continuous scans of your project with the JFrog Platform. The security-related information will be displayed under the Local view. It allows developers to view vulnerability information about their dependencies and source code in their IDE. With this information, a developer can make an informed decision on whether to use a component or not before it gets entrenched into the organization’s product.
Scan your project by clicking the Run Scan button. After the scan is done, a list of vulnerable files will appear.
Each descriptor file (like pom.xml in Maven, go.mod in Go, etc.) displayed in the JFrog Panel contains vulnerable dependencies, and each dependency contains the vulnerabilities themselves.
By right-clicking on a dependency line, you can jump to the dependency's declaration in the descriptor file or have the dependency upgraded to a version with a fix.
NOTE: Creating Ignore Rules is only available when a JFrog Project or Watch is defined.
Clicking a vulnerability in the list will open the vulnerability details view. This view contains information about the vulnerability, the vulnerable component, fixed versions, impact paths and much more.
Requires Xray version 3.66.5 or above and Enterprise X / Enterprise+ subscription with Advanced DevSecOps.
Xray automatically validates some high and very high-impact vulnerabilities, such as vulnerabilities that have prerequisites for exploitation, and provides contextual analysis information for these vulnerabilities, to assist you in figuring out which vulnerabilities need to be fixed.
CVEs Contextual Analysis data includes:
Contextual Analysis status: Contextual Analysis results indicate if a CVE was found applicable in your application or not applicable.
Contextual Analysis breakdown: An explanation provided by our research team as to why the CVE was found applicable or not applicable.
Remediation: Contextual mitigation steps and options provided by our research team that assist you with remediating the issues.
Requires Xray version 3.66.5 or above and Enterprise X / Enterprise+ subscription with Advanced DevSecOps.
Detect any secrets left exposed inside the code, to prevent any accidental leak of internal tokens or credentials.
NOTE: To ignore detected secrets, you can add a comment which includes the phrase jfrog-ignore above the line with the secret.
Requires Xray version 3.66.5 or above and Enterprise X / Enterprise+ subscription with Advanced DevSecOps.
Scan Infrastructure as Code (Terraform) files for early detection of cloud and infrastructure misconfigurations.
The icon indicates the top severity issue of a selected component and its transitive dependencies. The following table describes the severities from highest to lowest:
The JFrog Plugin allows you to view information about your builds directly from your CI system. This allows developers to keep track of the status of their code, while it is being built, tested, and scanned as part of the CI pipeline, regardless of the CI provider used.
This information can be viewed inside a Jetbrains IDE, from the JFrog Panel, under the CI tab.
The following details can be made available in the CI view:
Status of the build run (passed or failed)
Build run start time
Git branch and latest commit message
Link to the CI run log
Security information about the build artifacts and dependencies
Next, follow these steps:
Under Settings (Preferences) | Other Settings, click JFrog Global Configuration and configure the JFrog Platform URL and the user you created.
Click Apply, open the CI tab under the JFrog panel at the bottom of the screen, and click the Refresh button.
The JFrog Eclipse plugin adds JFrog Xray scanning of Maven, Gradle, and npm project dependencies to your Eclipse IDE. It allows developers to view panels displaying vulnerability information about the components and their dependencies directly in their Eclipse IDE. With this information, a developer can make an informed decision on whether to use a component or not before it gets entrenched into the organization’s product.
The plugin filter allows you to view the scanned results according to issues or licenses.
JFrog Xray version 1.7.2.3 and above.
To access the plugin configuration, click on the gear icon:
Login to the JFrog Platform UI, with a user who has admin permissions.
Create a Remote Repository with the following properties set:
Under the Basic
tab:
Package Type: Generic
Repository Key: jfrog-releases-repository
Under the Advanced
tab:
Uncheck the Store Artifacts Locally
option
Navigate to the Advanced tab within JFrog Global Configuration
Click on Download resources through Artifactory
Insert the Repository Key you created in the Repository key text field
The JFrog Plugin uses the IDE log files. By default, the log level used by the plugin is INFO.
You have the option of increasing the log level to DEBUG. Here's how to do it:
Go to Help | Diagnostic Tools | Debug Log Settings...
Inside the Custom Debug Log Configuration window add the following line:
We welcome community contributions through pull requests. To help us improve this project, please read our Contribution guide.
The JFrog Plugin uses JCEF (Java Chromium Embedded Framework) to create a webview component in the plugin's tool window.
Most IntelliJ-based IDEs use a boot runtime that contains JCEF by default.
Android Studio and some older versions of other IntelliJ-based IDEs use a boot runtime that doesn't contain JCEF by default, and therefore the plugin can't be loaded in them.
You can configure the JFrog Plugin to use the security policies you create in Xray. Policies enable you to create a set of rules, in which each rule defines security criteria, with a corresponding set of automatic actions according to your needs. Policies are enforced when applying them to Watches.
If you'd like to use a JFrog Project that is associated with the policy, follow these steps:
Configure your Project key in the plugin settings: under Settings (Preferences) | Other Settings, click JFrog Global Configuration and go to the Settings tab.
If however your policies are referenced through Xray Watches, follow these steps instead:
Configure your Watches in the plugin settings: under Settings (Preferences) | Other Settings, click JFrog Global Configuration and go to the Settings tab.
To install and work with the plugin:
Install the JFrog plugin
Configure the plugin to connect to JFrog Xray
Scan and view the results
Filter Xray Scanned Results
Install the JFrog Eclipse IDE Plugin
Download the plugin zip.
Go to Help | Install New Software, click Add and then click Archive.
Choose the plugin zip file you downloaded and click Add.
Click Next.
This section describes how to configure the JFrog Eclipse IDE Plugin and reviews connecting to JFrog Xray and scanning Gradle projects with the plugin. It reviews the following:
Connect to JFrog Xray
Scan Gradle Projects with the JFrog Eclipse IDE Plugin
Connect to JFrog Xray
Once the plugin is successfully installed, connect the plugin to your instance of JFrog Xray.
Go to Eclipse (Preferences), click JFrog Xray.
Set your JFrog platform URL and login credentials.
Test your connection to Xray using the Test Connection button.
Scan your workspace by clicking the Scan/Rescan button (the icon at the extension tab), or click Start Xray Scan from within the editor. The scan creates a list of files with vulnerabilities in the workspace.
After your builds are fetched from Artifactory, click the Builds button to choose which build to display.
Under the Transfer Artifactory Configuration from Self-Hosted to Cloud section, click on the acknowledgment checkbox. You cannot enable configuration transfer until you select the checkbox.
Toggle Enable Configuration Transfer to enable the transfer. The process may take a few minutes to complete.
View the log to verify there are no errors.
This command may take a few days to push all the files, depending on your system size and your network speed. While the command is running, it displays the transfer progress visually inside the terminal.
The plugin allows developers to find and fix security vulnerabilities in their projects and to see valuable information about the status of their code by continuously scanning it locally with the JFrog Platform.
You can learn more about enriched CVEs in the JFrog documentation.
Check out what our research team is up to and stay updated on newly discovered issues on the JFrog Security Research website.
Requires Xray version 3.66.5 or above and Enterprise X / Enterprise+ subscription with Advanced DevSecOps.
Install the JFrog Plugin via the Plugins tab in the IDE settings, or from the JetBrains Marketplace.
If Xray Watches are used, a closed eye icon will appear on a vulnerability line; clicking it lets you create an Ignore Rule in Xray.
Install the JFrog IntelliJ IDEA Plugin via the Plugins tab in the IDE settings, or from the JetBrains Marketplace.
Check out what our research team is up to and stay updated on newly discovered issues on the JFrog Security Research website.
If your JFrog Platform instance uses a domain with a self-signed certificate, add the certificate to IDEA as described in the JetBrains documentation.
From JFrog Xray version 3.x, as part of the JFrog Platform, IntelliJ IDEA users connecting to Xray from IntelliJ require ‘Read’ permission.
You can also create an Ignore Rule in Xray.
The CI information displayed in IDEA is pulled by the JFrog IDEA Plugin directly from JFrog Artifactory. This information is stored in Artifactory as part of the build-info, which is published to Artifactory by the CI server. Read more about build-info in the Build Info documentation. If the CI pipeline is also configured to scan the build-info by JFrog Xray, the JFrog IDEA Plugin will pull the results of the scan from JFrog Xray and display them in the CI view as well.
Set up your CI pipeline to expose build information, so that it is visible in IDEA.
Under Settings (Preferences) | Other Settings, click JFrog CI Integration. Set your CI build name in the Build name pattern field. This is the name of the build published to Artifactory by your CI pipeline. You have the option of setting * to view all the builds published to Artifactory.
Source Code: The JFrog Eclipse Plugin code is available on GitHub.
The JFrog IDEA Plugin requires certain resources for scanning your projects. By default, the JFrog IDEA Plugin downloads the resources it requires from releases.jfrog.io. If the machine running IDEA has no access to it, follow these steps to allow the resources to be downloaded through an Artifactory instance which the machine has access to:
URL:
To see the IDE log file, go to Help | Show Log in Explorer/Finder/Konqueror/Nautilus (the exact menu item depends on the IDE version and OS).
Please report issues by opening an issue on GitHub.
To solve this issue, open the dialog where you can change the boot runtime to one that contains JCEF.
The release notes are available on GitHub.
Create a JFrog Project, or obtain the relevant JFrog Project key.
Create a Policy on JFrog Xray.
Create a Watch on JFrog Xray and assign your Policy and Project as resources to it.
Create one or more Watches on JFrog Xray.
If JFrog Xray is behind an HTTP proxy, configure the proxy settings as described in the Eclipse documentation. This is supported since version 1.1.0 of the JFrog Eclipse Plugin.
Version / Compatibility:
1.2.0: Eclipse 4.13 - 4.33
1.1.2: Eclipse 4.13 - 4.20
1.1.1: Eclipse 4.10 - 4.19
SCA: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ✅ ✅ ❌
Contextual Analysis: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ✅ ✅ ❌
Secrets Detection: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅
SAST: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ✅ ✅ ✅
Infrastructure as Code (IaC): ❌ ❌ ❌ ❌ ❌ ❌ ❌ ❌ ❌ ✅ ❌ ❌ ❌
PR Scan: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅
Monitor Scan: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅
Autofix with new PR for direct dep.: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ✅ ✅ ❌
License Violations: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ✅ ✅ ❌
Here you can find the full template for the Frogbot repository scan workflow:
Here you can find the full template for the Frogbot pull request scan workflow:
Critical
High
Medium
Low
Unknown
Not Applicable
This section reviews how to use the JFrog Visual Studio extension including Scan and View results, Filter Scanned Results, Troubleshooting and Reporting issues.
To scan and view the project dependencies, open View | Other Windows | JFrog
JFrog Xray automatically performs a scan when the project is opened or when clicking on the Refresh button in the JFrog window.
The JFrog Extension provides a filter to narrow down the scanned results to view exactly what you need.
When troubleshooting issues, it is recommended to look at the log messages in the Output console, located at the bottom of the screen.
Please report issues by opening an issue on GitHub.
Behind the scenes, the JFrog plugin executes a Gradle script, which creates the dependencies tree of the project. The plugin reads the Gradle configuration defined in Eclipse. This configuration is added to Eclipse by the Buildship plugin. You can access this configuration by going to Preferences | Gradle | Gradle distribution.
If the Gradle configuration is not set, the Gradle Wrapper will be used. If the project does not include the Gradle Wrapper configuration, Gradle will be downloaded automatically.
This section reviews how to set up and install the JFrog Visual Studio Extension. It lists the supported Visual Studio versions and provides instructions on installing and configuring the extension. It reviews the following:
Supported Visual Studio Versions
Install the JFrog Visual Studio Extension
Configure the JFrog Visual Studio Extension to Connect to JFrog Xray
Use the JFrog Visual Studio Extension
Two extensions are published to the marketplace - each supports a different Visual Studio version:
Prerequisites
JFrog Xray version 2.5.0 and above.
To install and work with the extension:
Open the terminal window.
Run the nuget command. If it is not recognized as a command, add nuget.exe to the PATH environment variable.
If your projects use npm, run the npm command. If it is not recognized as a command, add npm.exe to the PATH environment variable.
Open Visual Studio
Go to Tools | Extensions and Updates
Search for JFrog.
Click on Download
Once the installation is completed, re-open Visual Studio.
Once the extension is successfully installed, connect Visual Studio to your instance of JFrog Xray.
Set your JFrog Platform URL and login credentials.
Test your connection to Xray using the Test connection button.
The JFrog Visual Studio Extension adds JFrog Xray scanning of NuGet project dependencies to your Visual Studio IDE. It allows developers to view panels displaying vulnerability information about the components and their dependencies directly in Visual Studio. With this information, a developer can make an informed decision on whether to use a component or not before it gets entrenched into the organization’s product.
The extension filter allows you to view the scanned results according to issue severity.
Working In Visual Studio Code?
Take a look at the user documentation for the JFrog Visual Studio Code Extension.
Source Code:
The JFrog Visual Studio Extension code is available on Github.
JFrog Frogbot is a Git bot that scans your Git repositories for security vulnerabilities.
It scans pull requests immediately after they are opened but before they are merged. This process notifies you if the pull request is about to introduce new vulnerabilities to your code. This unique capability ensures the code is scanned and can be fixed even before vulnerabilities are introduced into the codebase.
It scans the Git repository periodically and creates pull requests with fixes for detected vulnerabilities.
Software Composition Analysis (SCA): Scan your project dependencies for security issues. For selected security issues, get enhanced CVE data from our JFrog Security Research team. Frogbot uses JFrog's vast vulnerabilities database, to which we continuously add new component vulnerability data.
Validate Dependency Licenses: Ensure that the licenses for the project's dependencies are in compliance with a predefined list of approved licenses.
Static Application Security Testing (SAST): Provides fast and accurate security-focused engines that detect zero-day security vulnerabilities in your source code's sensitive operations, while minimizing false positives.
CVE Vulnerability Contextual Analysis: This feature uses the code context to eliminate false positive reports on vulnerable dependencies that are not applicable to the code. For CVE vulnerabilities that are applicable to your code, Frogbot will create pull request comments on the relevant code lines with full descriptions regarding the security issues caused by the CVE. Vulnerability Contextual Analysis is currently supported for Python, JavaScript, and Java code.
Secrets Detection: Detect any secrets left exposed inside the code, to stop any accidental leak of internal tokens or credentials.
Infrastructure as Code scans (IaC): Scan Infrastructure as Code (Terraform) files for early detection of cloud and infrastructure misconfigurations.
NOTE: SAST, Vulnerability Contextual Analysis, Secrets Detection and Infrastructure as Code scans require the JFrog Advanced Security Package.
Azure Repos
Bitbucket Server
GitHub
GitLab
Authenticating using OpenID Connect (OIDC)
The sensitive connection details, such as the access token used by JFrog Frogbot, can be automatically generated by the action instead of being stored as a secret in GitHub. This is made possible by leveraging the OpenID Connect (OIDC) protocol, which can authenticate the workflow issuer and supply a valid access token. Learn more about this integration in this blog post. To utilize the OIDC protocol, follow these steps:
Configure an OIDC Integration: This phase sets an integration between GitHub Actions to the JFrog platform.
Navigate to the Administration tab in the JFrog Platform UI.
Click General | Manage Integrations.
The 'Provider Name' value should be used as the 'oidc-provider-name' input in Workflow Configuration step 2 below.
The 'Audience' field does NOT represent the 'aud' claim that can be added to identity-mapping configured in the 'Claims JSON' (shown below). Only claims that are included in the 'Claims Json' created during step 2 will be validated.
Configure an identity mapping: This phase sets an integration between a particular GitHub repository to the JFrog platform.
You have the flexibility to define any valid list of claims required for request authentication. You can check a list of the possible claims here. Example Claims JSON:
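A minimal sketch of such a Claims JSON, restricting token requests to a single hypothetical repository (the repository claim is one of the standard claims GitHub includes in the JWT):

```json
{
  "repository": "my-org/my-repo"
}
```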
Set required permissions: In the course of the protocol's execution, it's imperative to acquire a JSON Web Token (JWT) from GitHub's OIDC provider. To request this token, it's essential to configure the specified permission in the workflow file:
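In a GitHub Actions workflow, the permission for requesting the JWT from GitHub's OIDC provider is id-token: write, set at the workflow or job level:

```yaml
permissions:
  id-token: write   # required for requesting the JWT from GitHub's OIDC provider
  contents: read
```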
Pass the 'oidc-provider-name' input to the Action (Required): The 'oidc-provider-name' parameter designates the OIDC configuration, one of whose identity mappings should align with the generated JWT claims. This input needs to align with the 'Provider Name' value established within the OIDC configuration in the JFrog Platform.
Pass the 'oidc-audience' input to the Action (Optional): The 'oidc-audience' input defines the intended recipients of an ID token (JWT), ensuring access is restricted to authorized recipients for the JFrog Platform. By default, it contains the URL of the GitHub repository owner. It enforces a condition, allowing only workflows within the designated repository/organization to request an access token. Read more about it here.
When using OIDC integration, you might encounter failures in Xray scans or JFrog Advanced Security scans due to token expiration. If this occurs, try extending the 'Token Expiration Time' in the 'Identity Mapping Configuration' phase to ensure the token remains valid until all scanners are triggered, which may vary depending on the project's size.
Make sure you have the connection details of your JFrog Platform.
In the pipelines.yml, make sure to set values for all the mandatory variables.
In the pipelines.yml, if you're using a Windows agent, modify the code inside the onExecute sections as described in the template comments.
Ensure that the JFrog Pipelines agent has the necessary package managers installed. For example, if the project utilizes npm, it is crucial to have the npm client installed on the agent.
Install Frogbot on GitLab repositories using GitLab CI
Make sure you have the connection details of your JFrog environment.
Go to your GitLab repository settings page and save the JFrog connection details as repository secrets with the following names - JF_URL, JF_USER, and JF_PASSWORD.
NOTE:
You can use JF_XRAY_URL and JF_ARTIFACTORY_URL instead of JF_URL.
You can use JF_ACCESS_TOKEN instead of JF_USER and JF_PASSWORD.
Make sure not to set these tokens as protected in GitLab.
Add a job named frogbot-scan to the .gitlab-ci.yml file in your GitLab repository. Use the following for execution:
In the .gitlab-ci.yml file, make sure that either JF_USER and JF_PASSWORD or JF_ACCESS_TOKEN are set, but not both.
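As a sketch of what such a job can look like (the Frogbot launcher URL and the variable names below follow the public Frogbot distribution layout, but verify them against the official template before use):

```yaml
frogbot-scan:
  rules:
    # Trigger the scan on merge request events
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  variables:
    JF_URL: $JF_URL
    JF_USER: $JF_USER
    JF_PASSWORD: $JF_PASSWORD
    JF_GIT_TOKEN: $JF_GIT_TOKEN
  script:
    # Download and run the Frogbot launcher script
    - curl -fLg "https://releases.jfrog.io/artifactory/frogbot/v2/[RELEASE]/getFrogbot.sh" | sh
    - ./frogbot scan-pull-request
```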
Frogbot scans your Git repositories periodically and automatically creates pull requests for upgrading vulnerable dependencies to a version with a fix.
NOTE: The pull request fix is presently unavailable for older NuGet projects that use the package.config file instead of the PackageReference syntax.
IntelliJ
WebStorm
PyCharm
Android Studio
GoLand
Click New Integration | OpenID Connect:
Configure the OIDC integration:
An identity mapping is a configuration object used by the JFrog Platform to associate incoming OIDC claims with particular selected fields. These fields might include repository, actor, workflow, and others. To configure the identity mapping, click on the integration created in step 1 and then click Add Identity Mapping. In the 'Priority' field, insert the value '1' and fill in the rest of the required fields:
Important Notice: For scanning pull requests, it is advisable to refrain from setting up Frogbot using JFrog Pipelines for open source projects. For further details, please refer to the 👮 Security Note for Pull Requests Scanning.
Inside JFrog Pipelines, save the JFrog connection details as an Integration named jfrogPlatform.
Inside JFrog Pipelines, save your Git access token in an Integration named gitIntegration.
Create a pipelines.yml file using one of the available templates and push the file into one of your Git repositories, under a directory named .jfrog-pipelines.
For more advanced configuration, use the frogbot-config.yml file to see all available options.
After you create a new pull request, Frogbot will automatically scan it.
NOTE: The scan output will include only new vulnerabilities added by the pull request. Vulnerabilities that aren't new, and existed in the code before the pull request was created, will not be included in the report. In order to include all the vulnerabilities in the report, including older ones that weren't added by this PR, use the includeAllVulnerabilities parameter in the frogbot-config.yml file.
The Frogbot scan on Bitbucket Server workflow:
The developer opens a pull request.
Frogbot scans the pull request and adds a comment with the scan results.
Frogbot can be triggered again following new commits, by adding a comment with the text rescan.
Frogbot uses JFrog Xray (version 3.29.0 and above is required) to scan your pull requests. It adds the scan results as a comment on the pull request. If no new vulnerabilities are found, Frogbot will also add a comment, confirming this.
The following features rely on the package manager used to build the project:
Software Composition Analysis (SCA)
Vulnerability Contextual Analysis
When installing Frogbot using JFrog Pipelines, Jenkins, and Azure DevOps, Frogbot will not wait for a maintainer's approval before scanning newly opened pull requests. Using Frogbot with these platforms is therefore not recommended for open-source projects.
When installing Frogbot using GitHub Actions and GitLab however, Frogbot will initiate the scan only after it is approved by a maintainer of the project. The goal of this review is to ensure that external code contributors don't introduce malicious code as part of the pull request. Since this review step is enforced by Frogbot when used with GitHub Actions and GitLab, it is safe to be used for open-source projects.
After you create a new pull request, Frogbot will automatically scan it.
NOTE: The scan output will include only new vulnerabilities added by the pull request. Vulnerabilities that aren't new, and existed in the code before the pull request was created, will not be included in the report. In order to include all the vulnerabilities in the report, including older ones that weren't added by this PR, use the includeAllVulnerabilities parameter in the frogbot-config.yml file.
The Frogbot Azure Repos scan workflow is:
The developer opens a pull request.
Frogbot scans the pull request and adds a comment with the scan results.
Frogbot can be triggered again following new commits, by adding a comment with the text rescan.
Important Notice: For Scanning Pull Requests, it is advisable to refrain from setting up Frogbot using Azure Pipelines for open source projects. For further details, please refer to the 👮 Security Note for Pull Requests Scanning.
To install Frogbot on Azure Repos repositories, follow these steps.
Make sure you have the connection details of your JFrog environment.
Decide which repository branches you'd like to scan.
Go to your Azure Pipelines project, and add a new pipeline.
Set Azure Repos Git as your code source.
Select the repository in which the Frogbot pipelines will reside.
Select Starter Pipeline and name it frogbot.
Use the content of the below templates for the pipeline. Edit the remaining mandatory Variables.
For the pipeline you created, save the JFrog connection details as variables with the following names - JF_URL, JF_USER, and JF_PASSWORD.
NOTE: You can also use JF_XRAY_URL and JF_ARTIFACTORY_URL instead of JF_URL, and JF_ACCESS_TOKEN instead of JF_USER and JF_PASSWORD.
To set the Variables in the pipeline edit page, click on the Variables button and set the Variables.
The Bamboo JFrog Plugin is designed to provide an easy integration between Bamboo and the JFrog Platform.
Unlike the legacy Bamboo Artifactory Plugin, the new Bamboo JFrog Plugin focuses on a single task that runs JFrog CLI commands. It is worth mentioning that both JFrog plugins can work side by side.
The advantage of this approach is that JFrog CLI is a powerful and versatile tool that integrates with all JFrog capabilities. It offers extensive features and functionalities, and it is constantly improved and updated with the latest enhancements from JFrog. This ensures that the Bamboo JFrog Plugin is always up-to-date with the newest features and improvements provided by JFrog.
With the Bamboo JFrog Plugin, you can easily deploy artifacts, resolve dependencies, and link them to the build jobs that created them. Additionally, you can scan your artifacts and builds for vulnerabilities using JFrog Xray and distribute your software packages to remote locations using JFrog Distribution.
Artifact Management: Manage build artifacts with Artifactory.
Dependency Resolution: Resolve dependencies from Artifactory for reliable builds.
Build Traceability: Link artifacts to their corresponding build jobs for better traceability.
Security Scanning: Scan artifacts and builds with JFrog Xray for vulnerabilities.
Software Distribution: Distribute software packages to remote locations using JFrog Distribution.
Download the latest release of the plugin from the Bamboo Marketplace.
Install the plugin on your Bamboo server.
In the Bamboo Administration section, go to Manage Apps and select JFrog Configuration.
Click on New JFrog Platform Configuration.
Configure your credentials details and run a Test Connection, then click Save.
By default, the latest version of JFrog CLI is installed and used when the JFrog CLI task runs. You can also specify a particular version to be used.
If your Bamboo agents have access to the internet, you can set the JFrog Plugin to download JFrog CLI directly from https://releases.jfrog.io. If not, you can set the plugin to download JFrog CLI through the configured Artifactory instance.
Set the Repository Name field value to the name of a Remote or Virtual repository in your Artifactory instance which proxies https://releases.jfrog.io/.
Once installed and configured, you can use the JFrog CLI task in your Bamboo build plans. Follow these steps:
Go to the Tasks section of your build plan.
Add the JFrog CLI task to your plan.
Configure the JFrog CLI task by selecting the appropriate Server ID.
The Maven Artifactory Plugin integrates into your build to allow you to do the following:
Resolve artifacts from Artifactory.
Capture the full build information and publish it to Artifactory.
Deploy all build Artifacts to Artifactory.
The Maven Artifactory Plugin coordinates are org.jfrog.buildinfo:artifactory-maven-plugin:x.x.x. It can be viewed on releases.jfrog.io.
A typical build plugin configuration would be as follows:
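The original snippet is not reproduced here; the following is a minimal sketch of such a configuration, with placeholder context URL, credentials, and repository keys that you would replace with your own:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.jfrog.buildinfo</groupId>
      <artifactId>artifactory-maven-plugin</artifactId>
      <version>x.x.x</version>
      <executions>
        <execution>
          <id>build-info</id>
          <goals>
            <goal>publish</goal>
          </goals>
          <configuration>
            <publisher>
              <!-- Placeholder values; use your own Artifactory URL and repositories -->
              <contextUrl>https://myorg.jfrog.io/artifactory</contextUrl>
              <username>deployer</username>
              <password>{{ARTIFACTORY_PASSWORD}}</password>
              <repoKey>libs-release-local</repoKey>
              <snapshotRepoKey>libs-snapshot-local</snapshotRepoKey>
            </publisher>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```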
The plugin's invocation phase is validate by default, and we recommend you don't change it so the plugin is called as early as possible in the lifecycle of your Maven build.
The example above configures the Artifactory publisher to deploy build artifacts to either the releases or the snapshots repository of Artifactory when mvn deploy is executed.
However, the Maven Artifactory Plugin provides many other configurations:
<deployProperties>
Specifies properties you can attach to published artifacts. For example: prop-value.
<artifactory>
Specifies whether environment variables are published as part of the BuildInfo metadata, and which include or exclude patterns are applied when variables are collected.
<publisher>
Defines an Artifactory repository where build artifacts should be published using a combination of a <contextUrl>
and <repoKey>/<snapshotRepoKey>
. Build artifacts are deployed if the deploy goal is executed and only after all modules are built.
<buildInfo>
Updates BuildInfo metadata published together with build artifacts. You can configure whether or not BuildInfo metadata is published using the configuration.
<proxy>
Specifies HTTP/S proxy.
Every build server provides its own set of environment variables. You can utilize these variables when configuring the plugin as shown in the following example:
Any plugin configuration value can contain several {{ .. }} expressions. Each expression can contain a single or multiple environment variables or system properties to be used. The expression syntax allows you to provide enough variables to accommodate any build server requirements according to the following rules:
Each expression can contain several variables, separated by a ' | ' character to be used with a configuration value.
The last value in a list is the default that will be used if none of the previous variables is available as an environment variable or a system property.
For example, for the expression {{V1|V2|"defaultValue"}}, the plugin will attempt to locate environment variable V1, then system property V1, then environment variable or system property V2, and if none of these is available, "defaultValue" will be used.
If the last value is not a string (as denoted by the quotation marks) and the variable cannot be resolved, null will be used (for example, for the expression {{V1|V2}} where neither V1 nor V2 can be resolved).
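As an illustration, the hypothetical deploy property below resolves the DEPLOY_TEAM environment variable or system property, falling back to "qa" if neither is set:

```xml
<deployProperties>
  <!-- DEPLOY_TEAM is a made-up variable name for this sketch -->
  <review.team>{{DEPLOY_TEAM|"qa"}}</review.team>
</deployProperties>
```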
The following project provides a working example of using the plugin: Maven Artifactory Plugin Example.
We welcome pull requests from the community. To help us improve this project, please read our Contribution guide.
The Jenkins JFrog Plugin allows for easy integration between Jenkins and the JFrog Platform. This integration allows your build jobs to deploy artifacts and resolve dependencies to and from Artifactory, and then have them linked to the build job that created them. It also allows you to scan your artifacts and builds with JFrog Xray and distribute your software package to remote locations using JFrog Distribution. This is all achieved by the plugin by wrapping JFrog CLI. Any JFrog CLI command can be executed from within your Jenkins Pipeline job using the JFrog Plugin.
Install the JFrog Plugin by going to Manage Jenkins | Manage Plugins.
Configure your JFrog Platform details by going to Manage Jenkins | Configure System.
Configure JFrog CLI as a tool in Jenkins as described in the Configuring JFrog CLI as a tool section.
To use JFrog CLI in your pipelines jobs, you should configure it as a tool in Jenkins by going to Manage Jenkins | Global Tool Configuration. You can use one of the following installation options:
If your agent has access to the internet, you can set the installer to automatically download JFrog CLI from https://releases.jfrog.io as shown in the below screenshot.
If your agent cannot access the internet, you can set the installer to automatically download JFrog CLI from the JFrog instance you configured in Manage Jenkins | Configure System as shown in the below screenshot. To set this up, follow these steps:
Create a generic remote repository in Artifactory for downloading JFrog CLI. You can name the repository jfrog-cli-remote. This is the name we'll be using here, but you can also choose a different name for the repository. Set the repository URL to https://releases.jfrog.io/artifactory/jfrog-cli/
In Manage Jenkins | Global Tool Configuration select the Install from Artifactory option as shown in the screenshot below.
Set the Server ID of your JFrog instance, which you configured in Manage Jenkins | Configure System. Also set jfrog-cli-remote as the name of the remote repository you created to download JFrog CLI from. If you used a different name for the repository, set that name here.
Install JFrog CLI manually on your build agent, and then set the path to the directory which includes the jf executable, as shown in the below screenshot.
To have your pipeline jobs run JFrog CLI commands, add the following to your pipeline script.
Step 1: Define JFrog CLI as a tool, by using the tool name you configured. For example, if you named the tool jfrog-cli, add the following to the script:
Step 2: Use the jf step to execute any JFrog CLI command as follows:
IMPORTANT: Notice the single quotes wrapping the command right after the jf step definition.
If the JFrog CLI command has arguments with white-spaces, you can provide the arguments as a list as follows:
When the above list syntax is used, the quotes required for the string syntax are replaced with quotes wrapping each item in the list as shown above. The above step is equivalent to the following shell command:
The list syntax also helps avoid spacing and escaping problems when some of the arguments use script variables.
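Putting the steps together, a minimal declarative pipeline might look like the sketch below; the tool name jfrog-cli and the repository paths are assumptions for this example:

```groovy
pipeline {
    agent any
    tools {
        // The tool name configured under Manage Jenkins | Global Tool Configuration
        jfrog 'jfrog-cli'
    }
    stages {
        stage('Upload') {
            steps {
                // String syntax - note the single quotes wrapping the command
                jf 'rt upload target/*.jar my-repo/'
                // List syntax - useful when arguments contain white-spaces
                jf(['rt', 'upload', 'target/*.jar', 'my-repo/'])
            }
        }
    }
}
```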
The plugin automatically sets the following environment variables: JFROG_CLI_BUILD_NAME and JFROG_CLI_BUILD_NUMBER with Jenkins's job name and build number respectively. You therefore don't need to specify the build name and build number on any of the build related JFrog CLI commands. If you wish to change the default values, add the following code to your pipeline script:
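For instance, the defaults can be overridden inside a declarative pipeline as follows (the values are placeholders):

```groovy
environment {
    // Override the build name and number set by the plugin
    JFROG_CLI_BUILD_NAME = "my-custom-build-name"
    JFROG_CLI_BUILD_NUMBER = "18"
}
```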
If you have multiple JFrog Platform instances configured, you can use the --server-id command option with the server ID you configured for the instance. For example:
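A sketch of such a command, assuming a second instance was configured with the ID my-other-server:

```groovy
jf 'rt upload target/*.jar my-repo/ --server-id my-other-server'
```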
Build-info is the metadata of a build. It includes all the details about the build broken down into segments that include version history, artifacts, project modules, dependencies, and everything that was required to create the build. In short, it is a snapshot of the components used to build your application, collected by the build agent. See below how you publish the build-info from your pipeline jobs. This section should be placed inside the job after the execution of the JFrog CLI commands used for the build.
When the job publishes the build-info to Artifactory, you can access it by clicking on the build-info icon, next to the job run.
To configure this plugin on Jenkins Configuration as Code, add the following sections to the jenkins.yaml:
Configure connection details to the JFrog platform
Add JFrog CLI tool using one of the following methods:
Automatic installation from releases.jfrog.io:
Automatic installation from Artifactory:
Manual installation:
These examples demonstrate only a fraction of the capabilities of JFrog CLI. Please refer to the JFrog CLI documentation for additional information.
We welcome pull requests from the community. To help us improve this project, please read our Contribution guide.
For GitHub repositories, issues that are found during Frogbot's repository scans are also added to the Security Alerts view in the UI.
This feature requires:
GitHub code scanning available.
The following alert types are supported:
1. CVEs on vulnerable dependencies
2. Secrets that are exposed in the code
3. Infrastructure as Code (Iac) issues on Terraform packages
4. Static Application Security Testing (Sast) vulnerabilities
5. Validate Allowed Licenses
When Frogbot scans the repository periodically, it checks the licenses of any project dependencies. If Frogbot identifies licenses that are not listed in a predefined set of approved licenses, it adds an alert. The list of allowed licenses is set up as a variable within the Frogbot workflow.
After you create a new pull request, the maintainer of the Git repository can trigger Frogbot to scan the pull request from the pull request UI.
NOTE: The scan output will include only new vulnerabilities added by the pull request. Vulnerabilities that aren't new, and existed in the code before the pull request was created, will not be included in the report. In order to include all the vulnerabilities in the report, including older ones that weren't added by this PR, use the includeAllVulnerabilities parameter in the frogbot-config.yml file.
The Frogbot GitHub scan workflow is:
The developer opens a pull request.
The Frogbot workflow is automatically triggered, and a GitHub environment named frogbot becomes pending for the maintainer's approval.
The maintainer of the repository reviews the pull request and approves the scan:
Frogbot can be triggered again following new commits, by repeating steps 2 and 3.
After you create a new merge request, the maintainer of the Git repository can trigger Frogbot to scan the merge request from the merge request UI.
NOTE: The scan output will include only new vulnerabilities added by the merge request. Vulnerabilities that aren't new, and existed in the code before the merge request was created, will not be included in the report. In order to include all the vulnerabilities in the report, including older ones that weren't added by this merge request, use the includeAllVulnerabilities parameter in the frogbot-config.yml file.
The Frogbot GitLab flow is as follows:
The developer opens a merge request.
The maintainer of the repository reviews the merge request and approves the scan by triggering the manual frogbot-scan job.
Frogbot is then triggered by the job; it scans the merge request and adds a comment with the scan results.
Frogbot can be triggered again following new commits, by triggering the frogbot-scan job again.
You can show people that your repository is scanned by Frogbot by adding a badge to the README of your Git repository.
You can add this badge by copying the following markdown snippet and pasting it into your repository's README.md file.
This repository includes pipeline templates for GitLab CI, for a quick and easy integration with the JFrog Platform.
The templates use the .setup-jfrog.yml pipeline scripts. The script is included by each of the templates, and sets up the integration between the pipeline and the JFrog Platform.
The script does the following:
Installs JFrog CLI
Configures JFrog CLI to work with the JFrog Platform
Sets the build name and build number values to $CI_PROJECT_PATH_SLUG-$CI_COMMIT_REF_NAME and $CI_PIPELINE_ID respectively, to allow publishing build-info to Artifactory
Optionally replaces the default Docker Registry with an Artifactory Docker Registry
Ensure you have the connection details for the JFrog Platform.
Store the JFrog Platform connection details on GitLab
Optionally set the URL of your Artifactory Docker Registry as the value of the JF_DOCKER_REGISTRY variable
Add the setup-jfrog pipeline script in your GitLab pipeline
Store the connection details of your JFrog Platform as GitLab CI/CD variables by using one of the following variables combinations:
JF_URL - Anonymous access (no authentication)
JF_URL + JF_USER + JF_PASSWORD - Basic authentication
JF_URL + JF_ACCESS_TOKEN - Authentication with a JFrog Access Token. NOTE: When pulling and pushing Docker images from/to Artifactory, the JF_USER variable is also required, in addition to the JF_ACCESS_TOKEN variable
Including the Script
The templates included in this repository already have the setup-jfrog script included as follows:
For Windows agents, use:
You also have the option of downloading the matching script from releases.jfrog.io, adding it to your project, and including it in your pipeline as follows:
You can also include it from one of your projects as follows:
Referencing the Script
Once the script is included in your pipeline, you'll need to reference it from any script or before_script sections in the pipeline as shown below:
At the end of your script, or as part of after_script, you should add the cleanup reference:
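Combining the include with the setup and cleanup references, a job can be sketched as follows; the remote URL and the jf rt ping command are illustrative, so verify them against the template you actually use:

```yaml
include:
  # Pulls in the hidden .setup_jfrog and .cleanup_jfrog jobs
  - remote: "https://releases.jfrog.io/artifactory/jfrog-cli/gitlab/v2/.setup-jfrog.yml"

build:
  script:
    # Install and configure JFrog CLI
    - !reference [.setup_jfrog, script]
    # Any JFrog CLI commands can now run against the configured platform
    - jf rt ping
    # Clean up the JFrog CLI configuration
    - !reference [.cleanup_jfrog, script]
```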
Downloading the setup-jfrog script and JFrog CLI from Artifactory
If your GitLab environment is air-gapped, you would want your pipeline to avoid downloading the setup-jfrog script and JFrog CLI from https://releases.jfrog.io/artifactory. Here's how you do this:
As shown in the above Including the Script and Referencing the Script sections, you have the option of copying the setup-jfrog script into your pipeline, and thus avoiding its download. Since the setup-jfrog script downloads JFrog CLI from https://releases.jfrog.io/artifactory, you should also configure the script to download JFrog CLI from a remote repository in your JFrog Artifactory instance. Follow these steps to have JFrog CLI downloaded from your Artifactory instance:
Create a remote generic repository in Artifactory pointing to https://releases.jfrog.io/artifactory/
Add the JF_RELEASES_REPO variable to GitLab with the name of the repository you created
Configurations can be done via Project Settings > CI/CD > Variables:
JF_DOCKER_REGISTRY
JFROG_CLI_BUILD_PROJECT
JFrog project key to be used by commands which expect build name and build number. Determines the project of the published build.
JFROG_CLI_VERSION
Use a specific JFrog CLI version instead of the latest version. The minimal version allowed is: 2.17.0
See more environment variables in the JFrog CLI documentation.
For Linux / Mac agents: cURL must be installed.
If the JF_DOCKER_REGISTRY and JF_ACCESS_TOKEN variables are set, then the JF_USER variable is also required.
Build info collection is unavailable when:
Working with a docker registry without JFrog CLI.
Running separate jobs on temporary agents or docker containers.
The setup-jfrog scripts are maintained in the jfrog-cli repository. Each yaml includes two hidden jobs with scripts named .setup_jfrog and .cleanup_jfrog, which can be referenced by the pipeline after the script is included.
.NET
Go
Gradle
Maven
npm
NuGet
Pip
Pipenv
Yarn Berry
.NET
Go
Gradle
Maven
npm
NuGet
Pip
Pipenv
Yarn Berry
Docker registry in Artifactory. For more info, see
ONLY ACTIVE JFROG CUSTOMERS ARE AUTHORIZED TO USE THE JFROG AI ASSISTANT. ALL OTHER USES ARE PROHIBITED.
This JFrog AI Assistant Addendum (this “Addendum”) forms part of the JFrog Subscription Agreement or other agreement made by and between the JFrog and Customer (the “Agreement”). Capitalized terms not otherwise defined in the body of this Addendum shall have the respective meanings assigned to them in the Agreement. Your use of the JFrog Platform, as applicable, shall continue to be governed by the Agreement.
THIS ADDENDUM TAKES EFFECT WHEN CUSTOMER (1) CLICKS THE “I ACCEPT” OR SIMILAR BUTTON AND/OR (2) BY ACCESSING OR USING THE APPLICABLE JFROG AI ASSISTANT SERVICE (respectively, the “AI ASSISTANT SERVICE” and “ADDENDUM EFFECTIVE DATE”). BY DOING SO, CUSTOMER: (A) ACKNOWLEDGES THAT IT HAS READ AND UNDERSTANDS THIS ADDENDUM; (B) REPRESENTS AND WARRANTS THAT IT HAS THE RIGHT, POWER, AND AUTHORITY TO ENTER INTO THIS ADDENDUM AND, IF ENTERING INTO THIS ADDENDUM FOR AN ENTITY, THAT IT HAS THE LEGAL AUTHORITY TO BIND SUCH ENTITY TO THIS ADDENDUM; AND (C) ACCEPTS THIS ADDENDUM AND AGREES THAT IT IS LEGALLY BOUND BY ITS TERMS.
IF CUSTOMER DOES NOT AGREE TO THIS ADDENDUM OR IF CUSTOMER IS A COMPETITOR OF JFROG OR ITS AFFILIATES (OR A PERSON OR ENTITY ACTING ON BEHALF OF A COMPETITOR), PLEASE SELECT THE “I DECLINE” OR SIMILAR BUTTON AND/OR DO NOT UNDER ANY CIRCUMSTANCES ACCESS OR USE THE AI ASSISTANT SERVICE.
a. AI Assistant Service. JFrog offers the applicable AI Assistant Service which references this Addendum, that is designed to enable Customer to: (i) generate or receive Output, in response to Input, for use in connection with the AI Assistant Service; and, if applicable to the specific AI Assistant Service, (ii) view suggested shortcuts and commands, in response to use of the AI Assistant Service by Customer, for use in connection with the AI Assistant Service (collectively, together with any Content, other than Output, provided to Customer by the AI Assistant Service, and any documentation for the AI Assistant Service, the “Service”). This Agreement only applies to the Service provided by JFrog and not to a Service provided by a third party.
b. Relationship with Agreement. In the event of any conflict between this Addendum and the Agreement, this Addendum will control, solely to the extent of the conflict. The Service is part of the “JFrog Platform” and the “JFrog Materials”, in each case, as used in the Agreement. “Customer”, as used herein, means the person or entity other than JFrog, that is party to the Agreement or an Order Form thereunder. “JFrog”, as used herein, means the applicable JFrog Contracting Entity in the Agreement. “Customer Data”, as used in the Agreement, excludes AI Assistant Data.
The license to the JFrog Platform set forth in the Agreement includes the right and license, during the Agreement Term, for Customer to access and use the Service. Without limiting the restrictions on use of the JFrog Platform set forth in the Agreement, Customer will not, directly or indirectly, permit, facilitate, or otherwise allow any other person or entity to: (a) access or use the Service, except for Customer Users; (b) access the source code or other underlying components of the Service, including the model, model parameters, or model weights; (c) access, copy, extract, scrape, crawl, or pull from the Service, through manual or automated means, any information, data, materials, text, prompts, images, or other content (“Content”) that has been, is used, or may be used by JFrog, to train, retrain, tune, validate, modify, update, or otherwise improve the Service (“Training Content”); (d) develop, build, train, or run a machine learning or artificial intelligence application, functionality, logic, model, software system, or process on or using the Service; (e) intentionally generate Output that is sensitive, confidential, or proprietary information of any third party without authorization, or collect personal data from the Service; (f) share, generate or prompt any content or engage in behavior that is unlawful, harmful, threatening, obscene, violent, abusive, tortious, defamatory, ridicule, libelous, vulgar, lewd, invasive of another’s privacy, hateful, or otherwise objectionable; (g) upload or transmit any personal data (except for Customer User Information), viruses or other malicious content or code into or through the Service; or (h) access or use the Service in a manner that does not comply with the JFrog Acceptable Use Policy available at https://jfrog.com/acceptable-use-policy/.
This Addendum commences on the Addendum Effective Date and will remain in effect until the Agreement expires or is terminated, or this Addendum is terminated by JFrog in accordance with this Section, whichever is the earlier (the “Term”). JFrog may terminate or suspend this Addendum, or the availability of the Service, at any time and for any reason by providing Customer with notice, without liability or other obligation to Customer. Termination of this Addendum will not impact the Agreement. Upon any termination or expiration of this Addendum, Customer will promptly cease access and use of the Service.
a. License to AI Assistant Content. Customer hereby grants JFrog and its Affiliates a non-exclusive, sublicensable, transferable, royalty-free, fully paid-up, worldwide right and license, to use, reproduce, distribute, perform, display, modify, create derivative works of, process, store, and disclose any Content or other: (i) input provided to the Service provided by or on behalf of Customer, which may include Customer Data (“Input”); and (ii) output provided to, or generated for Customer by the Service, in response to use of the AI Assistant Service by Customer or an Input (“Output”), in each case of the foregoing (i) and (ii), for the purposes of billing, capacity planning, compliance, security, integrity, availability, stability, providing the AI Assistant Service as generally available, and, in the event the Customer elects to provide any suggestions, enhancement requests, recommendations, corrections or other feedback, improving the AI Assistant Service and the JFrog Platform. The foregoing grant includes the right and license for JFrog and its Affiliates to use the AI Assistant Content to train, retrain, tune, validate, modify, update, or otherwise improve the Service or the JFrog Platform. “Input” and “Output” are collectively hereinafter referred to as “AI Assistant Content”. The AI Assistant Content is not the “Confidential Information” of Customer. Personal Data shall not be entered as an Input to the Service.
b. Ownership of AI Assistant Content. As between Customer and JFrog, and to the extent permitted by applicable law, Customer: (i) retains ownership rights in Input; and (ii) owns the Output, except to the extent such Output was provided to, or generated for, other JFrog customers by the Service. Customer acknowledges that the Output provided may not be new or unique or protectable under applicable laws and that similar Outputs may be provided to other customers and their users in response to their Inputs into the Service.
c. Processing of AI Assistant Content. You authorize JFrog and its third-party providers to process your AI Assistant Content to provide the Service. You agree that JFrog may use Sub-Processors to provide the Service.
Customer represents, warrants, and covenants that Customer owns or otherwise has and will have the necessary rights, licenses, and consents in and relating to the AI Assistant Content such that, as used by JFrog and its Affiliates in accordance with this Addendum, such AI Assistant Content does not and will not infringe, misappropriate, or otherwise violate any intellectual property rights, or other rights, of any third party or violate any applicable law. CUSTOMER ACCEPTS AND AGREES THAT ANY USE OF OR RELIANCE ON OUTPUTS IS AT CUSTOMER’S SOLE RISK AND CUSTOMER WILL NOT RELY ON OUTPUT AS A SOLE SOURCE OF TRUTH OR FACTUAL INFORMATION, OR AS A SUBSTITUTE FOR PROFESSIONAL ADVICE. JFROG DOES NOT ACCEPT LIABILITY OR RESPONSIBILITY FOR ANY INCORRECT, OFFENSIVE, UNLAWFUL, HARMFUL, OR OTHERWISE OBJECTIONABLE OUTPUT. THE OUTPUT DOES NOT REFLECT THE VIEWS, OPINIONS, POLICIES, OR POSITION OF JFROG OR ITS AFFILIATES.
Without limiting the scope of the obligations to indemnify and defend under the Agreement, the claims, demands, suits, or proceedings (each, a “Claim”) for which Customer indemnifies and defends JFrog and its Affiliates under the Agreement include Claims arising out of or related to: (a) the Service or Customer’s access and use thereof; (b) any acts or omissions by Customer that constitute a breach of this Addendum; (c) reliance on, or use of, any AI Assistant Content; and (d) fraud, gross negligence, or willful misconduct by Customer.
Any notice required or permitted by this Addendum may, if sent by JFrog, be delivered electronically, including through the Service or AI Assistant Service. The following terms will survive any termination or expiration of this Addendum: Section 4(a) (License to AI Assistant Content) and Section 5 (Representations; Warranties; Disclaimers) through Section 7 (Miscellaneous), inclusive.
This command is used to clean up files from a Git LFS repository. It deletes all files from a Git LFS repository in Artifactory that are no longer referenced in the corresponding Git repository.
Command name
rt git-lfs-clean
Abbreviation
rt glc
Command options:
--refs
[Default: refs/remotes/*] List of Git references in the form of "ref1,ref2,..." which should be preserved.
--repo
[Optional] Local Git LFS repository in Artifactory which should be cleaned. If omitted, the repository is detected from the Git repository.
--quiet
[Default: false] Set to true to skip the delete confirmation message.
--dry-run
[Default: false] If true, cleanup is only simulated. No files are actually deleted.
Command arguments:
If no arguments are passed in, the command assumes the .git repository is located in the current directory.
path to .git
Path to the directory which includes the .git directory.
Cleans up Git LFS files from Artifactory, using the configuration in the .git directory located at the current directory.
Cleans up Git LFS files from Artifactory, using the configuration in the .git directory located inside the path/to/git/config directory.
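As an illustration, the two examples above correspond to the following invocations (the path is a placeholder):

```shell
# Clean up Git LFS files, using the .git directory in the current directory
jf rt git-lfs-clean

# Clean up Git LFS files, using the .git directory inside path/to/git/config
jf rt git-lfs-clean path/to/git/config
```

Adding --dry-run to either invocation simulates the cleanup without deleting any files.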
JFrog CLI is a compact and smart client that provides a simple interface that automates access to JFrog products, simplifying your automation scripts and making them more readable and easier to maintain. JFrog CLI works with JFrog Artifactory, Xray, Distribution, and Pipelines (through their respective REST APIs), making your scripts more efficient and reliable in several ways:
Advanced upload and download capabilities
JFrog CLI allows you to upload and download artifacts concurrently, using a configurable number of threads that helps your automated builds run faster. For big artifacts, you can define a number of chunks into which files are split for parallel download.
JFrog CLI optimizes both upload and download operations by skipping artifacts that already exist in their target location. Before uploading an artifact, JFrog CLI queries Artifactory with the artifact's checksum. If it already exists in Artifactory's storage, the CLI skips sending the file, and if necessary, Artifactory only updates its database to reflect the artifact upload. Similarly, when downloading an artifact from Artifactory, if the artifact already exists in the same download path, it will be skipped. With checksum optimization, long upload and download operations can be paused in the middle, and then be continued later where they were left off.
JFrog CLI supports uploading files to Artifactory using wildcard patterns, regular expressions, and ANT patterns, giving you an easy way to collect all the files you wish to upload. You can also download files using wildcard patterns.
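To illustrate the capabilities above, here is a sketch of a wildcard upload and a chunked download (the repository name and patterns are placeholders):

```shell
# Upload all zip files under build/ using 8 concurrent threads
jf rt upload "build/*.zip" generic-local/releases/ --threads=8

# Download the same artifacts, splitting big files into 5 chunks
jf rt download "generic-local/releases/*.zip" out/ --split-count=5
```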
Support for popular package managers and build tools
JFrog CLI offers comprehensive support for popular package managers and build tools. It seamlessly integrates with package managers like npm, Maven, NuGet, Docker, and more, allowing you to easily manage and publish packages.
Source code and binaries scanning
JFrog CLI empowers you with robust scanning capabilities to ensure the security and compliance of your source code and software artifacts. It integrates with JFrog Xray, enabling you to scan and analyze your projects and packages, including containers, for vulnerabilities, license compliance, and quality issues. With JFrog CLI, you can proactively identify and mitigate potential risks, ensuring the integrity and safety of your software supply chain.
Support for Build-Info
Build-Info is a comprehensive metadata Software Bill of Materials (SBOM) that captures detailed information about the components used in a build. It serves as a vital source of information, containing version history, artifacts, project modules, dependencies, and other crucial data collected during the build process. By storing this metadata in Artifactory, developers gain traceability and analysis capabilities to improve the quality and security of their builds. The Build-Info encompasses project module details, artifacts, dependencies, environment variables, and more. It is collected and output in JSON format, facilitating easy access to information about the build and its components. JFrog CLI can create build-info and store it in Artifactory.
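As a sketch of the build-info flow described above (the build name, build number, and repository are placeholders):

```shell
# Upload artifacts and associate them with a build
jf rt upload "build/*.zip" generic-local/ --build-name=my-build --build-number=1

# Collect environment variables into the build-info
jf rt build-collect-env my-build 1

# Publish the accumulated build-info to Artifactory
jf rt build-publish my-build 1
```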
JFrog CLI runs on any modern OS that fully supports the Go programming language.
We value your input in making the JFrog CLI documentation better. You can help us enhance and improve it by recommending changes and additions. To contribute, follow these steps:
Go to the documentation project on GitHub: https://github.com/jfrog/documentation and create a pull request with your proposed changes and additions.
Your contributions will be reviewed, and if accepted, they will be merged into the documentation to benefit the entire JFrog CLI community.
This article guides you through the process of creating and publishing your own JFrog CLI Plugin.
Make sure Go 1.17 or above is installed on your local machine and is included in your system PATH.
Make sure git is installed on your local machine and is included in your system PATH.
Press the Use this template button to create a new repository. You may name it as you like.
Clone your new repository to your local machine. For example:
Run the following commands to build and run the template plugin.
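Assuming the template's default plugin name, hello-frog, building and running it might look like this, from the root of the cloned repository:

```shell
go build -o hello-frog
./hello-frog --help
```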
Open the plugin code with your favorite IDE and start having fun.
Well, plugins can do almost anything. The sky is the limit.
You can also add other Go packages to your go.mod and use them in your code.
Code formatting. To make sure the code is formatted properly, run the following Go command on your plugin sources, while inside the root of your project directory: go fmt ./...
Plugin name. The plugin name should include only lower-case characters, numbers, and dashes. The name length should not exceed 30 characters. It is recommended to use a short name for the users' convenience, but also make sure that the name hints at the plugin's functionality.
Consider creating a tag for your plugin sources. Although this is not mandatory, we recommend creating a tag for your GitHub repository before publishing the plugin. You can then provide this tag to the Registry when publishing the plugin, to make sure the correct code is built.
Plugin version. Make sure that your built plugin has the correct version. The version is declared as part of the plugin sources. To check your plugin version, run the plugin executable with the -v option, for example: ./my-plugin -v. The plugin version should have a v prefix (for example, v1.0.0) and should follow the semantic versioning guidelines.
Please make sure that the extension of your plugin descriptor file is yml and not yaml.
Please make sure your pull request includes only one or more plugin descriptors. Please do not add, edit or remove other files.
pluginName - The name of the plugin. This name should match the plugin name set in the plugin's code.
version - The version of the plugin. This version should have a v prefix and match the version set in the plugin's code.
repository - The plugin's code GitHub repository URL.
maintainers - The GitHub usernames of the plugin maintainers.
relativePath - If the plugin's go.mod file is not located at the root of the GitHub repository, set the relative path to this file. This path should not include the go.mod file.
branch - Optionally set an existing branch in your plugin's GitHub repository.
tag - Optionally set an existing tag in your plugin's GitHub repository.
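Putting the fields above together, a plugin descriptor might look like this sketch (all values are placeholders):

```yaml
pluginName: my-plugin
version: v1.0.0
repository: https://github.com/my-org/my-plugin
maintainers:
  - my-github-username
# Optional fields:
relativePath: my-plugin-dir
branch: main
tag: v1.0.0
```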
To publish a new version of your plugin, all you need to do is create a pull request, which updates the version inside your plugin descriptor file. If needed, your change can also include either the branch or tag.
In addition to the public official JFrog CLI Plugins Registry, JFrog CLI supports publishing and installing plugins to and from private JFrog CLI Plugins Registries. A private registry can be hosted on any Artifactory server. It uses a local generic Artifactory repository for storing the plugins.
To create your own private plugins registry, follow these steps.
On your Artifactory server, create a local generic repository named jfrog-cli-plugins.
Make sure your Artifactory server is included in JFrog CLI's configuration, by running the jf c show command.
If needed, configure your Artifactory instance using the jf c add command.
Set the ID of the configured server as the value of the JFROG_CLI_PLUGINS_SERVER environment variable.
If you wish the name of the plugins repository to be different from jfrog-cli-plugins, set this name as the value of the JFROG_CLI_PLUGINS_REPO environment variable.
The jf plugin install command will now install plugins stored in your private registry.
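The setup steps above can be sketched as follows (the server ID and repository name are placeholders):

```shell
# Verify the Artifactory server is configured, or add it if needed
jf c show
jf c add my-server

# Point JFrog CLI at the private plugins registry
export JFROG_CLI_PLUGINS_SERVER=my-server

# Only needed if the repository isn't named jfrog-cli-plugins
export JFROG_CLI_PLUGINS_REPO=my-plugins-repo
```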
To publish a plugin to the private registry, run the following command, while inside the root of the plugin's sources directory. This command will build the sources of the plugin for all the supported operating systems. All binaries will be uploaded to the configured registry.
jf plugin publish the-plugin-name the-plugin-version
When installing a plugin using the jf plugin install command, the plugin is downloaded into its own directory under the plugins directory, which is located under the JFrog CLI home directory. By default, you can find the plugins directory under ~/.jfrog/plugins/. So if, for example, you are developing a plugin named my-plugin, and you'd like to test it with JFrog CLI before publishing it, you'll need to place your plugin's executable, named my-plugin, under the following path -
Once the plugin's executable is there, you'll be able to see it is installed by just running jf.
In some cases your plugin may need to use external resources. For example, the plugin code may need to run an executable or read from a configuration file. You would therefore want these resources to be packaged together with the plugin, so that when it is installed, these resources are also downloaded and become available for the plugin.
The way to include resources for your plugin is to simply place them inside a directory named resources at the root of the plugin's sources directory. You can create any directory structure inside resources. When publishing the plugin, the content of the resources directory is published alongside the plugin executable. When installing the plugin, the resources are also downloaded.
When installing a plugin, the plugin's resources are downloaded to the following directory under the JFrog CLI home -
This means that during development, you'll need to make sure the resources are placed there, so that your plugin code can access them. Here's how your plugin code can access the resources directory -
JFrog CLI plugins allow enhancing the functionality of JFrog CLI to meet the specific needs of users and organizations. The source code of a plugin is maintained as an open source Go project on GitHub. All public plugins are registered in the JFrog CLI Plugins Registry. We encourage you, as developers, to create plugins and share them publicly with the rest of the community. When a plugin is included in the registry, it becomes publicly available and can be installed using JFrog CLI (version 1.41.1 or above is required). Plugins can be installed using the following JFrog CLI command:
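For example (the plugin name below is a placeholder):

```shell
jf plugin install the-plugin-name
```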
You have access to most of the JFrog CLI code base, because your plugin code depends on the jfrog-cli-core module, a dependency declared in your project's go.mod file. Feel free to explore the jfrog-cli-core code base and use it as part of your plugin.
You can package any external resources, such as executables or configuration files, and have them published alongside your plugin. Read more about this in the plugin resources section.
To make a new plugin available for anyone to use, you need to register the plugin in the JFrog CLI Plugins Registry. The registry is hosted on GitHub. The registry includes a descriptor file in YAML format for each registered plugin, inside the plugins directory. To include your plugin in the registry, create a pull request that adds the descriptor file for your plugin, using this file name format: your-plugin-name.yml.
To publish your plugin, you need to include it in the JFrog CLI Plugins Registry. Please make sure your plugin meets the following guidelines before publishing it.
Read the document. You'll be asked to accept it before your plugin becomes available.
Code structure. Make sure the plugin code is structured similarly to the plugin template. Specifically, it should include a commands package, and a separate file for each command.
Tests. The plugin code should include a series of thorough tests. Use the plugin template as a reference on how tests should be included as part of the source code. The tests should be executed while inside the root directory of the plugin project. Note: The Registry verifies the plugin and tries to run your plugin tests using the following command: go vet -v ./... && go test -v ./...
Create a README. Make sure that your plugin code includes a README.md file, placed at the root of the repository. The README should be structured according to the plugin template's README, and it needs to include all the information and details relevant to the plugin's users.
If your plugin also uses resources, you should place the resources under the following path -
JFrog supports the following Package Managers for JetBrains IDEs:
Go, Maven, Gradle, npm, Yarn v1, Yarn v2, Pip, Pipenv, Poetry
Additional SCA capabilities supported:
License Violations
Autofix for direct dependencies
JFrog supports Contextual Analysis, Secrets, Infrastructure as Code (IaC), and SAST for JetBrains IDEs. Follow the links to learn more about each feature and its supported technologies and languages.
Install Frogbot on GitHub using GitHub Actions
Perform the following steps to allow GitHub and Frogbot to work together:
Clone the GitHub repository you wish to scan to your local environment:
Switch to the branch you'd like to scan with Frogbot:
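The two steps above might look like this (the repository URL and branch name are placeholders):

```shell
git clone https://github.com/my-org/my-repo.git
cd my-repo
git checkout my-branch
```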
In the branch you'd like to scan, create a file named frogbot-scan-repository.yml. Fill it with the provided template and push it into the .github/workflows directory at the root of your GitHub repository.
You can see more advanced options in the full scan repository template.
Create a file named frogbot-scan-pull-request.yml. Fill it with the provided template, and then push it into the .github/workflows directory at the root of your GitHub repository.
You can see more advanced options in the full scan pull request template.
The frogbot-config.yml file contains project-related configuration used by Frogbot's scanning, such as details about the repository's directory structure, and may also include the package manager commands Frogbot needs in order to list the project's dependencies.
No, the file isn't mandatory. In most cases, Frogbot can understand the structure of the projects in the repository and list the project's dependencies without the file.
If your project doesn't use a frogbot-config.yml file, all the configuration Frogbot requires should be provided as variables as part of the Frogbot workflows.
Frogbot relies on the project's descriptor files, such as package.json and pom.xml, to identify the project's dependencies. It scans the repository for these descriptor files and utilizes the appropriate package manager, such as npm or Maven, to compile a list of dependencies for the project. If you desire manual control over the project structure or the package manager commands, you can achieve this by creating a frogbot-config.yml file. In the provided example, we outline two subprojects located at path/to/project-1 and path/to/project-2 for Frogbot to include in its scanning process.
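A frogbot-config.yml for the example above might look like the following sketch (the repository name and branch are placeholders; verify the exact schema against the full frogbot-config.yml template):

```yaml
- params:
    git:
      repoName: my-repo
      branches:
        - main
    scan:
      projects:
        - workingDirs:
            - path/to/project-1
        - workingDirs:
            - path/to/project-2
```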
Here's another example. Notice that we specify a custom install command here.
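A sketch of such a configuration (the repository name, path, and install command are placeholders):

```yaml
- params:
    git:
      repoName: my-repo
      branches:
        - main
    scan:
      projects:
        - workingDirs:
            - path/to/npm/project
          installCommand: npm i
```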
You have the option of using a single frogbot-config.yml file for scanning multiple Git repositories in the same organization if one of the following platforms is used.
GitHub with Jenkins or JFrog Pipelines
Bitbucket Server
Azure Repos
The file can be placed in any repository if it's in the same organization as all the repositories referenced in the file. Here's an example of a frogbot-config.yml referencing multiple repositories.
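Such a file might look like this sketch (repository and branch names are placeholders):

```yaml
- params:
    git:
      repoName: repo-1
      branches:
        - main
- params:
    git:
      repoName: repo-2
      branches:
        - main
        - dev
```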
If, however, you're using one of the following platforms, each repository that needs to be scanned by Frogbot should include its own frogbot-config.yml file.
GitHub with GitHub actions
GitLab
Frogbot expects the frogbot-config.yml file to be in the following path from the root of the Git repository: .frogbot/frogbot-config.yml.
IMPORTANT: The frogbot-config.yml file must be pushed to the target branch before it can be used by Frogbot. This means that if, for example, a pull request includes the frogbot-config.yml file and the target branch doesn't, the file will be ignored.
This section describes how to use the JFrog Eclipse IDE Plugin. It reviews the following:
Open JFrog tab
Scan and View Results using the JFrog Eclipse IDE Plugin
Filtering Xray Scanned Results using the JFrog Eclipse IDE Plugin
To open the plugin tab, click Window | Show View | Other | Security | JFrog.
JFrog Xray automatically performs a scan when the plugin is first loaded on startup. To manually invoke a scan:
Click Refresh in the JFrog plugin.
View the scanned results in the plugin.
The JFrog plugin provides the following filters to narrow down the scanned results to view exactly what you need:
Severity: Displays issues according to specific severity.
License: Displays components according to specific licenses.
Here you can find the full template for Frogbot:
See the complete content and structure of the frogbot-config.yml file.
Important Notice: For scanning pull requests, it is advisable to refrain from setting up Frogbot using Jenkins for open source projects. For further details, please refer to the documentation.
Frogbot adds the scan results to the pull request in the following format:
If no new vulnerabilities are found, Frogbot automatically adds the following comment to the pull request:
Software Composition Analysis (SCA)
If new vulnerabilities are found, Frogbot adds them as a comment on the pull request. For example:
VULNERABLE DEPENDENCIES

| Contextual Analysis | Direct Dependencies | Impacted Dependency | Fixed Versions |
| --- | --- | --- | --- |
| Not Applicable | minimist:1.2.5 | minimist:1.2.5 | [0.2.4], [1.2.6] |
| Applicable | protobufjs:6.11.2 | protobufjs:6.11.2 | [6.11.3] |
| Not Applicable | lodash:4.17.19 | lodash:4.17.19 | [4.17.21] |
Vulnerability Contextual Analysis
Static Application Security Testing (SAST)
Infrastructure as Code scans (IaC)
Validate Allowed Licenses
When Frogbot scans newly opened pull requests, it checks the licenses of any new direct project dependencies introduced by the pull request. If Frogbot identifies licenses that are not listed in a predefined set of approved licenses, it appends a comment to the pull request providing this information. The list of allowed licenses is set up as a variable within the Frogbot workflow.
When Frogbot detects secrets that have been inadvertently exposed within the code of a pull request, it promptly triggers an email notification to the user who pushed the corresponding commit. The email address utilized for this notification is sourced from the committer's Git profile configuration. Moreover, Frogbot offers the flexibility to direct the email notification to an extra email address if desired. To activate email notifications, it is necessary to configure your SMTP server details as variables within your Frogbot workflows.
Severity ratings shown in the example above: Critical, High, High.