JFrog CLI is a compact and smart client that provides a simple interface to automate access to JFrog products, simplifying your automation scripts and making them more readable and easier to maintain.
JFrog CLI works with JFrog Artifactory, Xray, and Distribution (through their respective REST APIs), making your scripts more efficient and reliable in several ways.
Advanced upload and download capabilities
JFrog CLI allows you to upload and download artifacts concurrently, using a configurable number of threads that helps your automated builds run faster. For big artifacts, you can define the number of chunks into which files are split for parallel download.
To optimize both uploads and downloads, JFrog CLI avoids transferring artifacts that already exist in the target location. Before uploading, the CLI checks the artifact's checksum with Artifactory. If the artifact is already present in Artifactory’s storage, the CLI skips the upload, and Artifactory may just update its database to reflect the new upload. Similarly, when downloading an artifact, if it already exists in the specified download path, it will be skipped. This checksum optimization also allows you to pause long upload and download operations and resume them later from where you left off.
JFrog CLI simplifies file uploads by supporting wildcard patterns, regular expressions, and ANT patterns, allowing you to easily select all the files you want to upload. You can also use wildcard patterns for downloading files.
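For example, the following commands are a minimal sketch (the repository name, file patterns, and option values are hypothetical) of uploading with multiple threads and downloading large artifacts in parallel chunks:
jf rt upload "build/*.zip" my-generic-repo/builds/ --threads=8
jf rt download "my-generic-repo/builds/*.tgz" ./downloads/ --split-count=5 --min-split=1024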
Support for popular package managers and build tools
JFrog CLI offers comprehensive support for popular package managers and build tools. It seamlessly integrates with package managers like npm, Maven, NuGet, Docker, and more, allowing you to easily manage and publish packages.
Source code and binaries scanning
JFrog CLI empowers you with robust scanning capabilities to ensure the security and compliance of your source code and software artifacts, including containers. It integrates with JFrog Xray, enabling you to scan and analyze your projects and packages for vulnerabilities, license compliance, and quality issues. With JFrog CLI, you can proactively identify and mitigate potential risks, ensuring the integrity and safety of your software supply chain.
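For example, assuming the project has already been associated with a configured JFrog server, the following commands sketch a source audit and an on-demand binary scan (the archive path is hypothetical):
jf audit
jf scan path/to/archive.zip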
Support for Build-Info
Build-Info is a comprehensive metadata Software Bill of Materials (SBOM) that captures detailed information about the components used in a build. It serves as a vital source of information, containing version history, artifacts, project modules, dependencies, and other crucial data collected during the build process. By storing this metadata in Artifactory, developers gain traceability and analysis capabilities to improve the quality and security of their builds. The Build-Info encompasses project module details, artifacts, dependencies, environment variables, and more. It is collected and outputted in a JSON format, facilitating easy access to information about the build and its components. JFrog CLI can create a Build-Info and store the Build-Info in Artifactory.
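As a minimal sketch (the build name, build number, and repository below are hypothetical), the following sequence associates uploaded artifacts with a build, collects environment variables, and publishes the Build-Info to Artifactory:
jf rt upload "target/*.jar" libs-release-local/my-app/ --build-name=my-app --build-number=42
jf rt build-collect-env my-app 42
jf rt build-publish my-app 42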
JFrog CLI runs on any modern OS that fully supports the Go programming language.
Your input is valuable in making the JFrog CLI documentation better. You can help enhance and improve it by recommending changes and additions. To contribute, follow these steps:
Go to the documentation project on GitHub (jfrog/documentation) and create a pull request with your proposed changes and additions.
Your contributions will be reviewed, and if accepted, they will be merged into the documentation to benefit the entire JFrog CLI community.
JFrog CLI supports using an HTTP/S proxy. All you need to do is set the HTTP_PROXY or HTTPS_PROXY environment variable with the proxy URL.
HTTP_PROXY, HTTPS_PROXY, and NO_PROXY are the industry standards for proxy usage.
HTTP_PROXY
Determines a URL to an HTTP proxy.
HTTPS_PROXY
Determines a URL to an HTTPS proxy.
NO_PROXY
Use this variable to bypass the proxy for specific IP addresses, subnets, or domains. It may contain a comma-separated list of hostnames or IPs, without protocols and ports, in standard Go NO_PROXY syntax. A typical usage is to set this variable to Artifactory's IP address.
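For example, in a bash session you might set the following (the proxy address and host below are hypothetical):
export HTTPS_PROXY=http://proxy.mycompany.com:8080
export NO_PROXY=artifactory.mycompany.local
jf rt ping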
Some of the Artifactory commands make use of the following environment variable:
Variable Name
Description
JFROG_CLI_MIN_CHECKSUM_DEPLOY_SIZE_KB
[Default: 10] Minimum file size in KB for which JFrog CLI performs checksum deploy optimization.
JFROG_CLI_RELEASES_REPO
Configured Artifactory repository name from which to download the jar needed by the mvn/gradle command. The value format should be <server ID>/<repository name>, where the server ID is configured by the 'jf c add' command. The repository should proxy https://releases.jfrog.io. This environment variable is used by the 'jf mvn' and 'jf gradle' commands, and also by the 'jf audit' command when used for Maven or Gradle projects.
JFROG_CLI_DEPENDENCIES_DIR
[Default: $JFROG_CLI_HOME_DIR/dependencies] Defines the directory to which JFrog CLI's internal dependencies are downloaded.
JFROG_CLI_REPORT_USAGE
[Default: true] Set to false to block JFrog CLI from sending usage statistics to Artifactory.
JFROG_CLI_SERVER_ID
Server ID configured using the 'jf config' command, unless sent as a command argument or option.
JFROG_CLI_BUILD_NAME
Build name to be used by commands which expect a build name, unless sent as a command argument or option.
JFROG_CLI_BUILD_NUMBER
Build number to be used by commands which expect a build number, unless sent as a command argument or option.
JFROG_CLI_BUILD_PROJECT
JFrog project key to be used by commands that expect build name and build number. Determines the project of the published build.
JFROG_CLI_BUILD_URL
Sets the CI server build URL in the build-info. The "jf rt build-publish" command uses the value of this environment variable unless the --build-url command option is sent.
JFROG_CLI_ENV_EXCLUDE
[Default: password;secret;key;token] List of semicolon-separated, case-insensitive patterns in the form "value1;value2;...". Environment variables matching those patterns will be excluded. This environment variable is used by the "jf rt build-publish" command, in case the --env-exclude command option is not sent.
JFROG_CLI_TRANSITIVE_DOWNLOAD
[Default: false] Set this option to true to include remote repositories in artifact searches when using the 'rt download' command. The search will target the first five remote repositories within the virtual repository. This feature is available from Artifactory version 7.17.0. NOTE: Enabling this option may increase the load on Artifactory instances that are proxied by multiple remote repositories.
JFROG_CLI_UPLOAD_EMPTY_ARCHIVE
[Default: false] Used by the "jf rt upload" command. Set to true if you'd like to upload an empty archive when '--archive' is set but all files were excluded by the exclusion patterns.
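For example, a CI job might set some of these variables once instead of passing the equivalent command options each time (the server ID, build name, and build number below are hypothetical):
export JFROG_CLI_SERVER_ID=my-rt-server
export JFROG_CLI_BUILD_NAME=my-app
export JFROG_CLI_BUILD_NUMBER=42
jf rt build-collect-env
jf rt build-publish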
Note
Read about additional environment variables at the Welcome to JFrog CLI page.
If you're using JFrog CLI from a bash, zsh, or fish shell, you can install JFrog CLI's auto-completion scripts to improve your command-line experience. Auto-completion helps save time and reduces errors by suggesting potential command options and arguments as you type.
Auto-completion allows you to:
Increase Efficiency: Quickly fill in commands and arguments without typing them out fully.
Reduce Errors: Minimize typographical errors in commands and options.
Discover Commands: Easily explore options for specific commands with in-line suggestions.
JFrog CLI is a command-line interface for interacting with JFrog Artifactory and other JFrog products. It simplifies various functions, such as uploading or downloading files, managing repositories, and more. For more information, refer to JFrog CLI.
The method of enabling auto-completion varies based on the shell you are using (bash, zsh, or fish).
If you're installing JFrog CLI using Homebrew, the bash, zsh, or fish auto-complete scripts are automatically installed. However, you need to ensure that your .bash_profile or .zshrc file is correctly configured. Refer to the Homebrew Shell Completion documentation for specific instructions.
If you are using the Oh My Zsh framework, follow these steps to enable JFrog CLI auto-completion:
Open your zsh configuration file, located at $HOME/.zshrc, with any text editor:
your-text-editor $HOME/.zshrc
Locate the line starting with plugins=.
Add jfrog to the list of plugins. For example:
plugins=(git mvn npm sdk jfrog)
Save and close the file.
Finally, apply the changes by running:
source $HOME/.zshrc
If you're not using Homebrew or Oh My Zsh, you can manually install the auto-completion scripts for your specific shell:
For Bash
To install auto-completion for bash, run the following command:
jf completion bash --install
Follow the on-screen instructions to complete the installation.
For Zsh
To install auto-completion for zsh, run the following command:
jf completion zsh --install
Again, follow the instructions provided during the installation process.
For Fish
To install auto-completion for fish, run the following command:
jf completion fish --install
Ensure you follow the relevant instructions to finalize the setup.
After installing the completion scripts, you can verify that auto-completion works by typing jf followed by pressing the Tab key. You should see a list of available commands and options.
ONLY ACTIVE JFROG CUSTOMERS ARE AUTHORIZED TO USE THE JFROG AI ASSISTANT. ALL OTHER USES ARE PROHIBITED.
This JFrog AI Assistant Addendum (this “Addendum”) forms part of the JFrog Subscription Agreement or other agreement made by and between the JFrog and Customer (the “Agreement”). Capitalized terms not otherwise defined in the body of this Addendum shall have the respective meanings assigned to them in the Agreement. Your use of the JFrog Platform, as applicable, shall continue to be governed by the Agreement.
THIS ADDENDUM TAKES EFFECT WHEN CUSTOMER (1) CLICKS THE “I ACCEPT” OR SIMILAR BUTTON AND/OR (2) BY ACCESSING OR USING THE APPLICABLE JFROG AI ASSISTANT SERVICE (respectively, the “AI ASSISTANT SERVICE” and “ADDENDUM EFFECTIVE DATE”). BY DOING SO, CUSTOMER: (A) ACKNOWLEDGES THAT IT HAS READ AND UNDERSTANDS THIS ADDENDUM; (B) REPRESENTS AND WARRANTS THAT IT HAS THE RIGHT, POWER, AND AUTHORITY TO ENTER INTO THIS ADDENDUM AND, IF ENTERING INTO THIS ADDENDUM FOR AN ENTITY, THAT IT HAS THE LEGAL AUTHORITY TO BIND SUCH ENTITY TO THIS ADDENDUM; AND (C) ACCEPTS THIS ADDENDUM AND AGREES THAT IT IS LEGALLY BOUND BY ITS TERMS.
IF CUSTOMER DOES NOT AGREE TO THIS ADDENDUM OR IF CUSTOMER IS A COMPETITOR OF JFROG OR ITS AFFILIATES (OR A PERSON OR ENTITY ACTING ON BEHALF OF A COMPETITOR), PLEASE SELECT THE “I DECLINE” OR SIMILAR BUTTON AND/OR DO NOT UNDER ANY CIRCUMSTANCES ACCESS OR USE THE AI ASSISTANT SERVICE.
a. AI Assistant Service. JFrog offers the applicable AI Assistant Service which references this Addendum, that is designed to enable Customer to: (i) generate or receive Output, in response to Input, for use in connection with the AI Assistant Service; and, if applicable to the specific AI Assistant Service, (ii) view suggested shortcuts and commands, in response to use of the AI Assistant Service by Customer, for use in connection with the AI Assistant Service (collectively, together with any Content, other than Output, provided to Customer by the AI Assistant Service, and any documentation for the AI Assistant Service, the “Service”). This Agreement only applies to the Service provided by JFrog and not to a Service provided by a third party.
b. Relationship with Agreement. In the event of any conflict between this Addendum and the Agreement, this Addendum will control, solely to the extent of the conflict. The Service is part of the “JFrog Platform” and the “JFrog Materials”, in each case, as used in the Agreement. “Customer”, as used herein, means the person or entity other than JFrog, that is party to the Agreement or an Order Form thereunder. “JFrog”, as used herein, means the applicable JFrog Contracting Entity in the Agreement. “Customer Data”, as used in the Agreement, excludes AI Assistant Data.
The license to the JFrog Platform set forth in the Agreement includes the right and license, during the Agreement Term, for Customer to access and use the Service. Without limiting the restrictions on use of the JFrog Platform set forth in the Agreement, Customer will not, directly or indirectly, permit, facilitate, or otherwise allow any other person or entity to: (a) access or use the Service, except for Customer Users; (b) access the source code or other underlying components of the Service, including the model, model parameters, or model weights; (c) access, copy, extract, scrape, crawl, or pull from the Service, through manual or automated means, any information, data, materials, text, prompts, images, or other content (“Content”) that has been, is used, or may be used by JFrog, to train, retrain, tune, validate, modify, update, or otherwise improve the Service (“Training Content”); (d) develop, build, train, or run a machine learning or artificial intelligence application, functionality, logic, model, software system, or process on or using the Service; (e) intentionally generate Output that is sensitive, confidential, or proprietary information of any third party without authorization, or collect personal data from the Service; (f) share, generate or prompt any content or engage in behavior that is unlawful, harmful, threatening, obscene, violent, abusive, tortious, defamatory, ridicule, libelous, vulgar, lewd, invasive of another’s privacy, hateful, or otherwise objectionable; (g) upload or transmit any personal data (except for Customer User Information), viruses or other malicious content or code into or through the Service; or (h) access or use the Service in a manner that does not comply with the JFrog Acceptable Use Policy available at https://jfrog.com/acceptable-use-policy/.
This Addendum commences on the Addendum Effective Date and will remain in effect until the Agreement expires or is terminated, or this Addendum is terminated by JFrog in accordance with this Section, whichever is the earlier (the “Term”). JFrog may terminate or suspend this Addendum, or the availability of the Service, at any time and for any reason by providing Customer with notice, without liability or other obligation to Customer. Termination of this Addendum will not impact the Agreement. Upon any termination or expiration of this Addendum, Customer will promptly cease access and use of the Service.
a. License to AI Assistant Content. Customer hereby grants JFrog and its Affiliates a non-exclusive, sublicensable, transferable, royalty-free, fully paid-up, worldwide right and license, to use, reproduce, distribute, perform, display, modify, create derivative works of, process, store, and disclose any Content or other: (i) input provided to the Service provided by or on behalf of Customer, which may include Customer Data (“Input”); and (ii) output provided to, or generated for Customer by the Service, in response to use of the AI Assistant Service by Customer or an Input (“Output”), in each case of the foregoing (i) and (ii), for the purposes of billing, capacity planning, compliance, security, integrity, availability, stability, providing the AI Assistant Service as generally available, and, in the event the Customer elects to provide any suggestions, enhancement requests, recommendations, corrections or other feedback, improving the AI Assistant Service and the JFrog Platform. The foregoing grant includes the right and license for JFrog and its Affiliates to use the AI Assistant Content to train, retrain, tune, validate, modify, update, or otherwise improve the Service or the JFrog Platform. “Input” and “Output” are collectively hereinafter referred to as “AI Assistant Content”. The AI Assistant Content is not the “Confidential Information” of Customer. Personal Data shall not be entered as an Input to the Service.
b. Ownership of AI Assistant Content. As between Customer and JFrog, and to the extent permitted by applicable law, Customer: (i) retains ownership rights in Input; and (ii) owns the Output, except to the extent such Output was provided to, or generated for, other JFrog customers by the Service. Customer acknowledges that the Output provided may not be new or unique or protectable under applicable laws and that similar Outputs may be provided to other customers and their users in response to their Inputs into the Service.
c. Processing of AI Assistant Content. You authorize JFrog and its third-party providers to process your AI Assistant Content to provide the Service. You agree that JFrog may use Sub-Processors to provide the Service.
Customer represents, warrants, and covenants that Customer owns or otherwise has and will have the necessary rights, licenses, and consents in and relating to the AI Assistant Content such that, as used by JFrog and its Affiliates in accordance with this Addendum, such AI Assistant Content does not and will not infringe, misappropriate, or otherwise violate any intellectual property rights, or other rights, of any third party or violate any applicable law. CUSTOMER ACCEPTS AND AGREES THAT ANY USE OF OR RELIANCE ON OUTPUTS IS AT CUSTOMER’S SOLE RISK AND CUSTOMER WILL NOT RELY ON OUTPUT AS A SOLE SOURCE OF TRUTH OR FACTUAL INFORMATION, OR AS A SUBSTITUTE FOR PROFESSIONAL ADVICE. JFROG DOES NOT ACCEPT LIABILITY OR RESPONSIBILITY FOR ANY INCORRECT, OFFENSIVE, UNLAWFUL, HARMFUL, OR OTHERWISE OBJECTIONABLE OUTPUT. THE OUTPUT DOES NOT REFLECT THE VIEWS, OPINIONS, POLICIES, OR POSITION OF JFROG OR ITS AFFILIATES.
Without limiting the scope of the obligations to indemnify and defend under the Agreement, the claims, demands, suits, or proceedings (each, a “Claim”) for which Customer indemnifies and defends JFrog and its Affiliates under the Agreement include Claims arising out of or related to: (a) the Service or Customer’s access and use thereof; (b) any acts or omissions by Customer that constitute a breach of this Addendum; (c) reliance on, or use of, any AI Assistant Content; and (d) fraud, gross negligence, or willful misconduct by Customer.
Any notice required or permitted by this Addendum may, if sent by JFrog, be delivered electronically, including through the Service or AI Assistant Service. The following terms will survive any termination or expiration of this Addendum: Section 4(a) (License to AI Assistant Content) and Section 5 (Representations; Warranties; Disclaimers) through Section 7 (Miscellaneous), inclusive.
JFrog CLI is a command-line tool that enhances the automation and management of JFrog services, including Artifactory, Xray, and other components within the JFrog ecosystem. Authentication is a vital component of using JFrog CLI, ensuring secure interactions with the JFrog services.
When working with JFrog Xray, you have two primary authentication options: username/password pairs and access tokens. Each method allows you to secure access to your JFrog instance and interact with the API effectively.
Before proceeding with authentication using JFrog CLI, ensure that you meet the following prerequisites:
JFrog CLI Installed: Make sure that you have the JFrog CLI installed on your system. You can download and install it from the JFrog CLI installation page.
JFrog Account: An active JFrog account with appropriate permissions to access the service. Ensure you have the necessary login credentials (username and password) or an access token.
Token Validity (if using access tokens): If you choose to authenticate with an access token, ensure that it is in a valid JWT format and has not expired. Review your token’s scope and permissions to confirm it grants the required access to Xray.
When using JFrog CLI with Xray, authentication is mandatory. JFrog CLI does not support access to Xray without valid authentication credentials. You can authenticate using either a username and password or an access token. Below are detailed instructions for both methods.
Using username & password
Access token
To authenticate using your Xray login credentials:
Configuration Options
You can configure your credentials permanently using the jf c add
command. Alternatively, you can provide your credentials dynamically for each command.
Configure Once Using jf c add
Run the following command:
jf c add
Follow the prompts to enter the necessary information:
Enter a unique server identifier: Your chosen name for this configuration (e.g., xray_server).
JFrog Platform URL: The base URL for your JFrog instance (e.g., https://yourjfroginstance.jfrog.io).
JFrog username: Your username.
JFrog password: Your password.
Using Command Options
For each command, you can specify the following options:
Example Command
To authenticate using an Xray Access Token:
Configuration Options
Similar to username/password authentication, you can configure your access token using the jf c add command, or you can include it directly with each command.
Configure Once Using jf c add
When prompted, enter your access token instead of a password.
Using Command Options
You can specify the following options for authentication:
Example Command
Note
Security: Ensure that your credentials and access tokens are kept secure and not hardcoded in scripts wherever possible. Consider using environment variables or secure vaults for sensitive information.
Token Expiration: Access tokens may have an expiration time. Be aware of this and renew your token as needed to maintain access.
The JFrog CLI offers enormous flexibility in how you download, upload, copy, or move files through the use of wildcard or regular expressions with placeholders.
Any wildcard enclosed in parentheses in the source path can be matched with a corresponding placeholder in the target path to determine the name of the artifact once uploaded.
For each .tgz file in the source directory, create a corresponding directory with the same name in the target repository and upload it there. For example, a file named froggy.tgz should be uploaded to my-local-repo/froggy (froggy will be created as a folder in Artifactory).
Upload all files whose names begin with "frog" to the frogfiles folder in the target repository, and append "-up" to each file's name. For example, a file called froggy.tgz will be uploaded as froggy.tgz-up.
Upload all files in the current directory to the my-local-repo repository and place them in directories that match their file extensions.
Copy all zip files under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository and append the copied files' names with ".cp".
This command is used to clean up files from a Git LFS repository. It deletes all files from the Git LFS repository in Artifactory that are no longer referenced in the corresponding Git repository.
Cleans up Git LFS files from Artifactory, using the configuration in the .git directory located at the current directory.
Cleans up Git LFS files from Artifactory, using the configuration in the .git directory located inside the path/to/git/config directory.
jf rt u "(*).tgz" my-local-repo/{1}/ --recursive=false
jf rt u "(frog*)" my-local-repo/frogfiles/{1}-up --recursive=false
jf rt u "(*).(*)" my-local-repo/{2}/{1}.{2} --recursive=false
jf rt cp "source-frog-repo/rabbit/(*.zip)" target-frog-repo/rabbit/{1}.cp
Command name
rt git-lfs-clean
Abbreviation
rt glc
Command options:
--refs
[Default: refs/remotes/*] List of Git references in the form of "ref1,ref2,..." which should be preserved.
--repo
[Optional] Local Git LFS repository in Artifactory which should be cleaned. If omitted, the repository is detected from the Git repository.
--quiet
[Default: false] Set to true to skip the delete confirmation message.
--dry-run
[Default: false] If true, cleanup is only simulated. No files are actually deleted.
Command arguments:
If no arguments are passed in, the command assumes the .git repository is located in the current directory.
path to .git
Path to the directory which includes the .git directory.
jf rt glc
jf rt glc path/to/git/config
Command Option
Description
--url
JFrog Xray API endpoint URL. Typically ends with /xray.
--user
Your JFrog username.
--password
Your JFrog password.
jf rt ping --url "https://yourjfroginstance.jfrog.io/xray" --user "your_username" --password "your_password"
Command Option
Description
--url
JFrog Xray API endpoint URL. Typically ends with /xray.
--access-token
Your JFrog access token. Ensure it is a valid JWT format token.
jf rt ping --url "https://yourjfroginstance.jfrog.io/xray" --access-token "your_access_token"
JFrog CLI lets you upload and download artifacts between your local file system and Artifactory. This includes uploading symlinks (soft links).
Symlinks are stored in Artifactory as zero-size files with the following properties: symlink.dest - the actual path on the original filesystem to which the symlink points; symlink.destsha1 - the SHA1 checksum of the value in the symlink.dest property.
To upload symlinks, the jf rt upload command should be executed with the --symlinks option set to true.
When downloading symlinks stored in Artifactory, the CLI can verify that the file to which the symlink points actually exists and that it has the correct SHA1 checksum. To add this validation, use the --validate-symlinks option with the jf rt download command.
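For example, the following commands are a sketch (the repository and paths are hypothetical) of uploading symlinks and validating them on download:
jf rt upload "dist/*" my-generic-repo/dist/ --symlinks=true
jf rt download "my-generic-repo/dist/*" ./dist/ --validate-symlinks=true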
ONLY ACTIVE JFROG CUSTOMERS ARE AUTHORIZED TO USE THE JFROG AI ASSISTANT. ALL OTHER USES ARE PROHIBITED.
The JFrog CLI AI Command Assistant streamlines your workflow by turning natural language inputs into JFrog CLI commands.
Simply describe your desired actions, and the assistant generates commands with all necessary parameters, whether you're uploading artifacts, managing repositories, scanning your code, or performing other actions using the JFrog CLI.
Each query is treated individually, and while the interface allows you to refine requests, it doesn’t maintain a chat history.
This tool helps users access the full power of JFrog CLI without needing to remember specific syntax, ensuring efficiency and accuracy.
To use the JFrog CLI AI Command Assistant, follow these simple steps:
Ensure that you are in a terminal session where JFrog CLI is installed and configured.
This feature is available from JFrog CLI version 2.69 and above. To check your version, run:
jf --version
Type the following command to initiate the AI assistant:
jf how
After entering the command, you will see a prompt:
Your request:
Describe in natural language what you would like the JFrog CLI to do. The AI assistant will generate the exact CLI command needed.
For example, you might type:
Your request: How to upload all files in the 'build' directory to the 'my-repo' repository?
The AI assistant will process your request and output the corresponding JFrog CLI command, including all necessary parameters. For the example above, it will generate:
jf rt u build/ my-repo/
You can now copy the generated command and run it in your terminal.
If needed, you can refine your request and try again.
This command is used to initialize a new JFrog worker.
This command will generate the following files:
manifest.json
– The worker specification, which includes its name, the code location, the secrets, and other data used by the worker.
package.json
– The file describing the development dependencies of the worker. These are not used when executing your worker in the JFrog runtime.
worker.ts
– The worker source code; initially this contains sample code for the selected event.
worker.spec.ts
- The source code of the worker unit tests.
tsconfig.json
- The TypeScript configuration file.
types.ts
- A file containing the event's specific types that can be used in the worker code.
Initialize a new BEFORE_DOWNLOAD worker named my-worker.
Test-run a worker. The worker needs to be initialized before running this command. The command will execute the worker with its local content, so it can be used to test the worker execution before pushing the local changes to the server.
Test-run a worker initialized in the current directory, with a payload located in a file named payload.json in the same directory.
This command is used to edit a worker manifest in order to add or edit a secret that can be used for deployment and/or execution.
Secrets are stored encrypted with a master password that will be requested by the command.
Once secrets are added to the manifest, the master password will be required by the deploy and test-run commands.
Add a secret named my-secret to a worker initialized in the current directory.
This command can be used to verify that Artifactory is accessible by sending an applicative ping to Artifactory.
Ping the configured default Artifactory server.
Ping the configured Artifactory server with ID rt-server-1.
Ping the Artifactory server accessible through the specified URL.
This page is about the integration of JFrog Platform Services with JFrog CLI.
Read more about JFrog CLI.
Managing JFrog Workers
Workers is a JFrog Platform service that allows you to extend and control your execution flows. It provides a serverless execution environment. You can create workers to enhance the platform's functionality. Workers are triggered automatically by events within the JFrog Platform, giving you the flexibility to address specific use cases. For on-demand tasks, configure HTTP-triggered workers.
You can read more about JFrog Workers.
Command name
worker init
Abbreviation
worker i
Command options:
--server-id
[Optional] Server ID configured using the config command.
--timeout-ms
[Default: 5000] The request timeout in milliseconds.
--force
[Default: false] Whether to overwrite existing files.
--no-test
[Default: false] Whether to skip test generation.
--application
[Optional] The application that provides the event. If omitted the service will try to guess it and raise an error if no application is found.
--project-key
[Optional] The key of the project that the worker should belong to.
Command arguments:
action
The name of the action to init (e.g. BEFORE_DOWNLOAD). To see the list of all available actions, use jf worker list-event.
worker-name
The name of the worker.
jf worker init BEFORE_DOWNLOAD my-worker
Command name
worker test-run
Abbreviation
worker dry-run, worker dr, worker tr
Command options:
--server-id
[Optional] Server ID configured using the config command.
--timeout-ms
[Default: 5000] The request timeout in milliseconds.
--no-secrets
[Default: false] Do not use registered secrets.
Command arguments:
json-payload
The JSON payload expected by the worker. Use - to read the payload from standard input. Use @<file-path> to read the payload from a file located at <file-path>.
jf worker dry-run @payload.json
Command name
worker deploy
Abbreviation
worker d
Command options:
--server-id
[Optional] Server ID configured using the config command.
--timeout-ms
[Default: 5000] The request timeout in milliseconds.
--no-secrets
[Default: false] Do not use registered secrets.
jf worker deploy --server-id my-server
Command name
worker list-event
Abbreviation
worker le
Command options:
--server-id
[Optional] Server ID configured using the config command.
--timeout-ms
[Default: 5000] The request timeout in milliseconds.
--project-key
[Optional] List events available to a specific project.
jf worker list-event --server-id my-server
Command name
worker execution-history
Abbreviation
worker exec-hist, worker eh
Command options:
--server-id
[Optional] Server ID configured using the config command.
--timeout-ms
[Default: 5000] The request timeout in milliseconds.
--project-key
[Optional] List events available to a specific project.
--with-test-runs
[Default: false] Whether to include test-runs entries.
Command arguments:
worker-key
[Optional] The worker key. If not provided it will be read from the manifest.json in the current directory.
jf worker execution-history --with-test-runs my-worker
Command name
worker add-secret
Abbreviation
worker as
Command options:
--edit
[Default: false] Whether to update an existing secret.
Command arguments:
secret-name
The secret name
jf worker add-secret my-secret
Command name
worker edit-schedule
Abbreviation
worker es
Command options:
--cron
[Mandatory] A standard cron expression with minutes resolution. Seconds resolution is not supported by Worker service.
--timezone
[Default: UTC] The timezone to use for scheduling.
jf worker edit-schedule --cron "* * * * *"
Command name
worker list
Abbreviation
worker ls
Command options:
--server-id
[Optional] Server ID configured using the config command.
--json
[Default: false] Whether to use JSON instead of CSV as output.
--timeout-ms
[Default: 5000] The request timeout in milliseconds.
--project-key
[Optional] List the events created in a specific project.
jf worker list --server-id my-platform --json
Command name
rt ping
Abbreviation
rt p
Command options:
--url
[Optional] JFrog Artifactory URL. (example: https://acme.jfrog.io/artifactory)
--server-id
[Optional] Server ID configured using the jf c add command. If not specified, the default configured Artifactory server is used.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
Command arguments:
The command accepts no arguments.
jf rt ping
jf rt ping --server-id=rt-server-1
jf rt ping --url=https://my-rt-server.com/artifactory
When used with Artifactory, JFrog CLI offers several means of authentication. JFrog CLI does not support accessing Artifactory without authentication.
To authenticate yourself using your JFrog login credentials, either configure your credentials once using the jf c add command or provide the following options to each command.
--url
JFrog Artifactory API endpoint URL. It usually ends with /artifactory
--user
JFrog username
--password
JFrog password or API key
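For example, a one-off command authenticating with a username and password might look like this (the URL and credentials are placeholders):
jf rt ping --url=https://acme.jfrog.io/artifactory --user=myuser --password=mypassword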
For enhanced security, when JFrog CLI is configured to use a username and password / API key, it automatically generates an access token to authenticate with Artifactory. The generated access token is valid for one hour only. JFrog CLI automatically refreshes the token before it expires. The jf c add command allows disabling this functionality. This feature is currently not supported by commands which use external tools or package managers or work with JFrog Distribution.
To authenticate yourself using an Artifactory Access Token, either configure your Access Token once using the jf c add command or provide the following options to each command.
--url
JFrog Artifactory API endpoint URL. It usually ends with /artifactory
--access-token
JFrog access token
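For example (the URL and token are placeholders):
jf rt ping --url=https://acme.jfrog.io/artifactory --access-token=<your-access-token>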
Note
Currently, authentication with RSA keys is not supported when working with external package managers and build tools (Maven, Gradle, Npm, Docker, Go and NuGet) or with the cUrl integration.
From version 4.4, Artifactory supports SSH authentication using RSA public and private keys. To authenticate yourself to Artifactory using RSA keys, execute the following instructions:
Enable SSH authentication as described in Configuring SSH.
Configure your Artifactory URL to have the following format: ssh://[host]:[port]
There are two ways to do this:
For each command, use the --url command option.
Specify the Artifactory URL in the correct format using the jf c add command.
Warning: Don't include your Artifactory context URL.
Make sure that the [host] component of the URL only includes the hostname or the IP, but not your Artifactory context URL.
Configure the path to your SSH key file. There are two ways to do this:
For each command, use the --ssh-key-path command option.
Specify the path using the jf c add command.
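For example, the following is a sketch of configuring an SSH-based server entry with the jf c add command (the server ID, host, port, and key path are placeholders):
jf c add my-ssh-server --artifactory-url=ssh://myartifactory.mycompany.com:1339 --ssh-key-path=~/.ssh/id_rsa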
From Artifactory release 7.38.4, you can authenticate users using a client certificate (mTLS). This requires a reverse proxy and some setup on the front reverse proxy (Nginx). Refer to the Artifactory documentation for how to set this up.
To authenticate with the proxy using a client certificate, either configure your certificate once using the jf c add command or use the --client-cert-path and --client-cert-key-path command options with each command.
Note
Authentication using client certificates (mTLS) is not supported by commands which integrate with package managers.
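For example, a download command that presents a client certificate to the proxy might look like this (the repository and certificate paths are placeholders):
jf rt download "my-repo/*.zip" ./out/ --client-cert-path=/path/to/cert.pem --client-cert-key-path=/path/to/cert.key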
Not Using a Public CA (Certificate Authority)?
This section is relevant for you if you're not using a public CA (Certificate Authority) to issue the SSL certificate used to connect to your Artifactory domain. You may not be using a public CA either because you're using self-signed certificates or you're running your own PKI services in-house (often by using a Microsoft CA).
In this case, you'll need to make those certificates available for JFrog CLI, by placing them inside the security/certs directory, which is under JFrog CLI's home directory. By default, the home directory is ~/.jfrog, but it can be also set using the JFROG_CLI_HOME_DIR environment variable.
Note
The supported certificate format is PEM. Make sure to have one file with the .pem extension, or provide as many certificates as you want and run the c_rehash command on the folder as follows: c_rehash ~/.jfrog/security/certs/
Some commands support the --insecure-tls option, which skips the TLS certificates verification.
Before version 1.37.0, JFrog CLI expected the certificates to be located directly under the security directory. JFrog CLI automatically moves the certificates to the new directory when installing version 1.37.0 or above. Downgrading back to an older version requires replacing the configuration directory manually. You'll find a backup of the old configuration under .jfrog/backup
Execute an HTTP-triggered worker.
Command name
worker execute
Abbreviation
worker exec, worker e
Command options:
--server-id
[Optional] Server ID configured using the config command.
--timeout-ms
[Default: 5000] The request timeout in milliseconds.
--project-key
[Optional] The key of the project that the worker belongs to.
Command arguments:
worker-key
The worker key. If not provided it will be read from the manifest.json in the current directory.
json-payload
The JSON payload expected by the worker. Use - to read the payload from standard input. Use @<file-path> to read the payload from a file located at <file-path>.
Execute an HTTP-triggered worker initialized in the current directory, with a payload located in a file named payload.json in the same directory.
jf worker execute @payload.json
Execute an HTTP-triggered worker with a payload from the standard input.
jf worker execute - <<EOF
{
  "a": "key",
  "an-integer": 14
}
EOF
Execute an HTTP-triggered worker by providing the payload as an argument.
jf worker execute '{"my":"payload"}'
Execute a cURL command, using the configured Artifactory details. The command expects the cURL client to be included in the PATH.
Note - This command supports only Artifactory REST APIs, which are accessible under https://<JFrog base URL>/artifactory/api/
Command name
rt curl
Abbreviation
rt cl
Command options:
--server-id
[Optional] Server ID configured using the jf c add command. If not specified, the default configured server is used.
Command arguments:
cUrl arguments and flags
The same list of arguments and flags passed to cUrl, except for the following changes: 1. The full Artifactory URL should not be passed. Instead, the REST endpoint URI should be sent. 2. The login credentials should not be passed. Instead, the --server-id should be used.
Currently only servers configured with username and password / API key are supported.
Execute the cUrl client, to send a GET request to the /api/build endpoint to the default Artifactory server
jf rt curl -XGET /api/build
Execute the cUrl client, to send a GET request to the /api/build endpoint to the configured my-rt-server server ID.
jf rt curl -XGET /api/build --server-id my-rt-server
This command is used to remove a registered worker from your Artifactory instance.
Command name
worker undeploy
Abbreviation
worker rm
Command options:
--server-id
[Optional] Server ID configured using the config command.
--timeout-ms
[Default: 5000] The request timeout in milliseconds.
--project-key
[Optional] The key of the project that the worker belongs to.
Command arguments:
worker-key
[Optional] The worker key. If not provided it will be read from the manifest.json in the current directory.
Undeploy a worker named my-worker from an Artifactory instance identified by my-server.
jf worker undeploy --server-id my-server my-worker
JFrog CLI Plugins allow enhancing the functionality of JFrog CLI to meet the specific user and organization needs. The source code of a plugin is maintained as an open source Go project on GitHub. All public plugins are registered in JFrog CLI's Plugins Registry. We encourage you, as developers, to create plugins and share them publicly with the rest of the community. When a plugin is included in the registry, it becomes publicly available and can be installed using JFrog CLI. Read the JFrog CLI Plugins Developer Guide if you wish to create and publish your own plugins.
A plugin which is included in JFrog CLI's Plugins Registry can be installed using the following command.
$ jf plugin install the-plugin-name
This command will install the plugin from the official public registry by default. You can also install a plugin from a private JFrog CLI Plugin registry, as described in the Private Plugins Registries section.
In addition to the public official JFrog CLI Plugins Registry, JFrog CLI supports publishing and installing plugins to and from private JFrog CLI Plugins Registries. A private registry can be hosted on any Artifactory server. It uses a local generic Artifactory repository for storing the plugins.
To create your own private plugins registry, follow these steps.
On your Artifactory server, create a local generic repository named jfrog-cli-plugins.
Make sure your Artifactory server is included in JFrog CLI's configuration, by running the jf c show command.
If needed, configure your Artifactory instance using the jf c add command.
Set the ID of the configured server as the value of the JFROG_CLI_PLUGINS_SERVER environment variable.
If you wish the name of the plugins repository to be different from jfrog-cli-plugins, set this name as the value of the JFROG_CLI_PLUGINS_REPO environment variable.
The jf plugin install command will now install plugins stored in your private registry.
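For example, a sketch of pointing JFrog CLI at a private registry before installing a plugin (the server ID and repository name are placeholders):
export JFROG_CLI_PLUGINS_SERVER=my-rt-server
export JFROG_CLI_PLUGINS_REPO=my-jfrog-cli-plugins
jf plugin install the-plugin-name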
To publish a plugin to the private registry, run the following command, while inside the root of the plugin's sources directory. This command will build the sources of the plugin for all the supported operating systems. All binaries will be uploaded to the configured registry.
jf plugin publish the-plugin-name the-plugin-version
To use the CLI, install it on your local machine, or download its executable, place it anywhere in your file system and add its location to your PATH environment variable.
Environment Variables
The jf options command displays all the supported environment variables.
Variable Name
Description
Usage Example
JFROG_CLI_LOG_LEVEL
Determines the log level of the JFrog CLI. Possible values are: DEBUG, INFO, WARN, and ERROR. If set to ERROR, logs contain error messages only. This is useful for reading or parsing CLI output without additional information.
This can be useful in troubleshooting issues or monitoring CLI activity more closely during artifact uploads or downloads.
JFROG_CLI_LOG_TIMESTAMP
Controls the log messages timestamp format. Possible values are: TIME, DATE_AND_TIME, and OFF.
This is useful for maintaining logs that show when each command was executed, which is helpful for audit trails.
JFROG_CLI_HOME_DIR
Defines the JFrog CLI home directory path.
This changes the default configuration and cache location, useful for organizing settings when working on multiple projects.
JFROG_CLI_TEMP_DIR
Defines the temp directory used by JFrog CLI. The default is the operating system's temp directory.
If you wish to use a specific temporary directory for CLI operations, use this variable.
JFROG_CLI_BUILD_NAME
Specifies the build name used by commands expecting a build name unless sent as a command argument or option.
This enables tracking and associating artifacts with a specific build in CI/CD pipelines.
JFROG_CLI_BUILD_NUMBER
Specifies the build number used by commands expecting a build number unless sent as a command argument or option.
This is generally paired with the build name to track build artifacts in CI/CD workflows.
JFROG_CLI_BUILD_PROJECT
Sets the Artifactory project key.
This associates your builds with a specific project key in Artifactory, making it easier to manage multiple projects.
JFROG_CLI_SERVER_ID
Server ID configured using the config command.
This allows you to reference a configured server for all JFrog CLI commands without needing to specify it each time.
CI
Disables interactive prompts and progress bar when set to true.
Useful for automation in CI/CD pipelines, where user interaction is not possible or desired.
JFROG_CLI_PLUGINS_SERVER
Configured Artifactory server ID from which to download JFrog CLI Plugins.
Helps configure which server to use when downloading custom plugins.
JFROG_CLI_PLUGINS_REPO
Determines the name of the local repository for JFrog CLI Plugins.
Pair this with the plugins server to specify a custom repository for plugin management.
JFROG_CLI_TRANSITIVE_DOWNLOAD
Set this option to true to include remote repositories in artifact searches when using the rt download command.
Useful in CI setups to ensure you're pulling artifacts from all relevant sources, including remote repositories.
JFROG_CLI_RELEASES_REPO
Configured Artifactory repository name from which to download the jar needed by mvn/gradle.
This allows calling Maven or Gradle related commands through JFrog CLI effectively, ensuring the necessary library jars are accessible.
JFROG_CLI_DEPENDENCIES_DIR
Defines the directory to which JFrog CLI's internal dependencies are downloaded.
Use this when you have specific requirements or restrictions on where dependencies should be stored.
JFROG_CLI_MIN_CHECKSUM_DEPLOY_SIZE_KB
Minimum file size in KB for which JFrog CLI performs checksum deploy optimization.
Adjust this if you want to skip checksum operations for smaller files, which can speed up upload operations in certain scenarios.
JFROG_CLI_UPLOAD_EMPTY_ARCHIVE
Set to true to upload an empty archive when --archive is set but all files are excluded by exclusion patterns.
Helps maintain the structure in Artifactory even when no files need to be included in an upload.
JFROG_CLI_BUILD_URL
Sets the CI server build URL in the build-info. Used in rt build-publish command unless overridden by the --build-url option.
Enables linking Artifactory items back to the original CI/CD build process.
JFROG_CLI_ENV_EXCLUDE
List of case-insensitive patterns for environment variables to exclude.
This is important for building security-sensitive projects where you need to hide certain variables from being published as part of build info.
JFROG_CLI_FAIL_NO_OP
Set to true if you want the command to return exit code 2 when no files are affected.
Valuable in CI pipelines to indicate if a command has executed without affecting any files, allowing you to implement logic based on file mutations.
JFROG_CLI_ENCRYPTION_KEY
If provided, encrypts sensitive data in the config with the key (must be exactly 32 characters long).
Use this to protect sensitive information such as API keys in your configuration files.
JFROG_CLI_AVOID_NEW_VERSION_WARNING
Set to true to avoid checking for the latest available JFrog CLI version and printing a warning if a newer version exists.
This can be used in CI environments to prevent unnecessary console output warnings about new versions.
JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR
Defines the directory path where command summaries data is stored, with every command having its own directory.
This helps in organizing command summaries for review or debugging purposes later on.
JFROG_CLI_ANALYZER_MANAGER_VERSION
Specifies the version of Analyzer Manager to use for security commands, provided in semantic versioning format (e.g., 1.13.4). By default, the latest stable version is used.
Important for projects where specific analyzer features or behavior in certain versions are required.
Note: Always consider security best practices when handling sensitive information, particularly when using environment variables in CI/CD pipelines.
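For example, a CI pipeline might reduce console noise and isolate the CLI configuration like this (the home directory path is hypothetical):
export CI=true
export JFROG_CLI_LOG_LEVEL=ERROR
export JFROG_CLI_HOME_DIR=/opt/ci/.jfrog
jf rt ping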
The Command Summaries feature enables the recording of JFrog CLI command outputs into the local file system. This functionality can be used to generate a summary in the context of an entire workflow (a sequence of JFrog CLI commands) and not only in the scope of a specific command.
An instance of how Command Summaries are utilized can be observed in the setup-cli GitHub action. This action employs the compiled markdown to generate a comprehensive summary of the entire workflow.
jf rt build-publish
jf rt upload
jf scan
jf build-scan
Each command execution that incorporates this feature can save data files into the file system. These files are then used to create an aggregated summary in Markdown format.
Saving data to the file system is essential because CLI commands execute in separate contexts. Consequently, each command that records new data should also incorporate any existing data into the aggregated markdown. This is required because the CLI cannot determine which command will be the last one executed in a sequence of commands.
The CLI does not automatically remove the files, as they are designed to remain beyond a single execution. As a result, it is your responsibility to manage your pipelines and delete files as necessary. You can clear the entire JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR directory that you have configured to activate this feature.
To use the Command Summaries, you'll need to set the JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR environment variable. This variable designates the directory where the data files and markdown files will be stored.
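For example, a sketch of enabling command summaries for a short workflow (the output directory, repository, build name, and number are hypothetical):
export JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR=/tmp/jfrog-summaries
jf rt upload "build/*.zip" my-repo/ --build-name=my-app --build-number=42
jf rt build-publish my-app 42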
If you wish to contribute a new CLI command summary to the existing ones, you can submit a pull request once you've followed these implementation guidelines:
Implement the CommandSummaryInterface
Record data during runtime
type CommandSummaryInterface interface {
GenerateMarkdownFromFiles(dataFilePaths []string) (finalMarkdown string, err error)
}
// Initialize your implementation
myNewCommandSummary, err := commandsummary.New(&MyCommandStruct{}, "myNewCommandSummary")
if err != nil {
return
}
// Record
return myNewCommandSummary.Record(data)
The GenerateMarkdownFromFiles
function needs to process multiple data files, which are the results of previous command executions, and generate a single markdown string content. As each CLI command has its own context, we need to regenerate the entire markdown with the newly added results each time.
// Step 1. Implement the CommandSummaryInterface
type CommandStruct struct{}
type singleRecordedObject struct {
Name string
}
func (cs *CommandStruct) GenerateMarkdownFromFiles(dataFilePaths []string) (markdown string, err error) {
// Aggregate all the results into a slice
var recordedObjects []*singleRecordedObject
for _, path := range dataFilePaths {
var singleObject singleRecordedObject
if err = commandsummary.UnmarshalFromFilePath(path, &singleObject); err != nil {
return
}
recordedObjects = append(recordedObjects, &singleObject)
}
// Create markdown from the aggregated results
var markdownBuilder strings.Builder
for _, object := range recordedObjects {
markdownBuilder.WriteString(fmt.Sprintf("* %s\n", object.Name))
}
markdown = markdownBuilder.String()
return
}
// Step 2. Record data during runtime
func recordCommandSummary(data any) (err error) {
if !commandsummary.ShouldRecordSummary() {
return
}
commandSummaryImplementation, err := commandsummary.New(&CommandStruct{}, "CommandName")
if err != nil {
return
}
return commandSummaryImplementation.Record(data)
}
Each command that implements the CommandSummaryInterface
will have its own subdirectory inside the JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR/JFROG_COMMAND_SUMMARY
directory.
Every subdirectory will house data files, each one corresponding to a command recording, along with a markdown file that has been created from all the data files. The function implemented by the user is responsible for processing all the data files within its respective subdirectory and generating a markdown string.
JFROG_CLI_COMMAND_SUMMARY_OUTPUT_DIR/JFROG_COMMAND_SUMMARY
│
└─── Command1
│ datafile1.txt
│ datafile2.txt
│ markdown.txt
│
└─── Command2
datafile1.txt
datafile2.txt
markdown.txt
JFrog CLI v2 was launched in July 2021. It includes changes to the functionality and usage of some of the legacy JFrog CLI commands. The changes are the result of feedback we received from users over time through GitHub, making the usage and functionality easier and more intuitive. For example, some of the default values changed and are now more consistent across different commands. We also took this opportunity to improve and restructure the code, as well as to replace old and deprecated functionality.
Most of the changes included in v2 are breaking changes compared to the v1 releases. We therefore packaged and released these changes under JFrog CLI v2, allowing users to migrate to v2 only when they are ready.
New enhancements to JFrog CLI are planned to be introduced as part of v2 only, and v1 receives very little development attention nowadays. We therefore encourage users who haven't yet migrated to v2 to do so.
The default value of the --flat option is now set to false for the jfrog rt upload command.
The deprecated syntax of the jfrog rt mvn command is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt mvnc command.
The deprecated syntax of the jfrog rt gradle command is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt gradlec command.
The deprecated syntax of the jfrog rt npm and jfrog rt npm-ci commands is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt npmc command.
The deprecated syntax of the jfrog rt go command is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt go-config command.
The deprecated syntax of the jfrog rt nuget command is no longer supported. To use the new syntax, the project needs to be first configured using the jfrog rt nugetc command.
All Bintray commands are removed.
The jfrog rt config command is removed and replaced by the jfrog config add command.
The jfrog rt use command is removed and replaced with the jfrog config use command.
The --props command option and props file spec property for the jfrog rt upload command are removed, and replaced with the --target-props command option and targetProps file spec property respectively.
The following commands are removed
jfrog rt release-bundle-create
jfrog rt release-bundle-delete
jfrog rt release-bundle-distribute
jfrog rt release-bundle-sign
jfrog rt release-bundle-update
and replaced with the following commands respectively
jfrog ds release-bundle-create
jfrog ds release-bundle-delete
jfrog ds release-bundle-distribute
jfrog ds release-bundle-sign
jfrog ds release-bundle-update
The jfrog rt go-publish command now only supports Artifactory version 6.10.0 and above. Also, the command no longer accepts the target repository as an argument. The target repository should be pre-configured using the jfrog rt go-config command.
The jfrog rt go command no longer falls back to the VCS when dependencies are not found in Artifactory.
The --deps, --publish-deps, --no-registry and --self options of the jfrog rt go-publish command are now removed.
The --apiKey option is now removed. The API key should now be passed as the value of the --password option.
The --exclude-patterns option is now removed, and replaced with the --exclusions option. The same is true for the excludePatterns file spec property, which is replaced with the exclusions property.
The JFROG_CLI_JCENTER_REMOTE_SERVER and JFROG_CLI_JCENTER_REMOTE_REPO environment variables are now removed and replaced with the JFROG_CLI_EXTRACTORS_REMOTE environment variable.
The JFROG_CLI_HOME environment variable is now removed and replaced with the JFROG_CLI_HOME_DIR environment variable.
The JFROG_CLI_OFFER_CONFIG environment variable is now removed and replaced with the CI environment variable. Setting CI to true disables all prompts.
The directory structure is now changed when the jfrog rt download command is used with placeholders and --flat=false (--flat=false is now the default). When placeholders are used, the value of the --flat option is ignored.
When the jfrog rt upload command uploads symlinks to Artifactory, the target file referenced by the symlink is now uploaded to Artifactory under the symlink name. If the --symlink option is used, the symlink itself (not the referenced file) is uploaded, with the referenced file attached to it as a property.
To download the executable, please visit the JFrog CLI Download Site.
You can also download the sources from the JFrog CLI Project on GitHub where you will also find instructions on how to build JFrog CLI.
The legacy name of JFrog CLI's executable is jfrog. In an effort to make the CLI usage easier and more convenient, we recently exposed a series of new installers, which install JFrog CLI with the new jf executable name. For backward compatibility, the old installers will remain available. We recommend however migrating to the newer jf executable name.
The following installers are available for JFrog CLI v2. These installers make JFrog CLI available through the jf executable.
# Create the keyrings directory if it doesn't exist
sudo mkdir -p /usr/share/keyrings;
# Download and save the JFrog GPG key to a keyring file
curl -fsSL https://releases.jfrog.io/artifactory/api/v2/repositories/jfrog-debs/keyPairs/primary/public | sudo gpg --dearmor -o /usr/share/keyrings/jfrog.gpg
# Add the JFrog repository to your APT sources with the signed-by option
echo "deb [signed-by=/usr/share/keyrings/jfrog.gpg] https://releases.jfrog.io/artifactory/jfrog-debs focal contrib" | sudo tee /etc/apt/sources.list.d/jfrog.list
# Update the package list
sudo apt update;
# Install the JFrog CLI
sudo apt install -y jfrog-cli-v2-jf;
# Run the JFrog CLI intro command
jf intro;
# Create and configure the JFrog CLI YUM repository
echo "[jfrog-cli]" > jfrog-cli.repo &&
echo "name=JFrog CLI" >> jfrog-cli.repo &&
echo "baseurl=https://releases.jfrog.io/artifactory/jfrog-rpms" >> jfrog-cli.repo &&
echo "enabled=1" >> jfrog-cli.repo &&
echo "gpgcheck=1" >> jfrog-cli.repo &&
# Import GPG keys for verifying packages
# Note: Two keys are imported for backward compatibility with older versions
rpm --import https://releases.jfrog.io/artifactory/api/v2/repositories/jfrog-rpms/keyPairs/primary/public &&
rpm --import https://releases.jfrog.io/artifactory/api/v2/repositories/jfrog-rpms/keyPairs/secondary/public &&
# Move the repository file to the YUM configuration directory
sudo mv jfrog-cli.repo /etc/yum.repos.d/ &&
# Install the JFrog CLI package
yum install -y jfrog-cli-v2-jf &&
# Display an introductory message for JFrog CLI
jf intro
brew install jfrog-cli
curl -fL https://install-cli.jfrog.io | sh
curl -fL https://getcli.jfrog.io/v2-jf | sh
Note: If you are using any shim-based version managers (like Volta, nvm, etc.) for a package, it is advised to avoid using npm-based installation; instead, please use one of the other installation options JFrog provides.
npm install -g jfrog-cli-v2-jf
Slim: docker run releases-docker.jfrog.io/jfrog/jfrog-cli-v2-jf jf -v
Full: docker run releases-docker.jfrog.io/jfrog/jfrog-cli-full-v2-jf jf -v
powershell: "Start-Process -Wait -Verb RunAs powershell '-NoProfile iwr https://releases.jfrog.io/artifactory/jfrog-cli/v2-jf/\[RELEASE\]/jfrog-cli-windows-amd64/jf.exe -OutFile $env:SYSTEMROOT\\system32\\jf.exe'"
choco install jfrog-cli-v2-jf
The following installers are available for JFrog CLI v2. These installers make JFrog CLI available through the jfrog executable.
wget -qO - https://releases.jfrog.io/artifactory/jfrog-gpg-public/jfrog_public_gpg.key | sudo apt-key add -;
echo "deb https://releases.jfrog.io/artifactory/jfrog-debs xenial contrib" | sudo tee -a /etc/apt/sources.list;
apt update;
apt install -y jfrog-cli-v2;
echo "\[jfrog-cli\]" > jfrog-cli.repo; echo "name=jfrog-cli" >> jfrog-cli.repo; echo "baseurl=https://releases.jfrog.io/artifactory/jfrog-rpms" >> jfrog-cli.repo; echo "enabled=1" >> jfrog-cli.repo; rpm --import https://releases.jfrog.io/artifactory/jfrog-gpg-public/jfrog\_public\_gpg.key sudo mv jfrog-cli.repo /etc/yum.repos.d/; yum install -y jfrog-cli-v2;
brew install jfrog-cli
curl -fL https://getcli.jfrog.io/v2 | sh
Note: If you are using any shim-based version managers (like Volta, nvm, etc.) for a package, it is advised to avoid using npm-based installation; instead, please use one of the other installation options JFrog provides.
npm install -g jfrog-cli-v2
Slim: docker run releases-docker.jfrog.io/jfrog/jfrog-cli-v2 jfrog -v
Full: docker run releases-docker.jfrog.io/jfrog/jfrog-cli-full-v2 jfrog -v
choco install jfrog-cli
The following installations are available for JFrog CLI v1. These installers make JFrog CLI available through the jfrog executable.
wget -qO - https://releases.jfrog.io/artifactory/api/v2/repositories/jfrog-debs/keyPairs/primary/public | sudo apt-key add -;
echo "deb https://releases.jfrog.io/artifactory/jfrog-debs xenial contrib" | sudo tee -a /etc/apt/sources.list;
apt update;
apt install -y jfrog-cli;
echo "\[jfrog-cli\]" > jfrog-cli.repo; echo "name=jfrog-cli" >> jfrog-cli.repo; echo "baseurl=https://releases.jfrog.io/artifactory/jfrog-rpms" >> jfrog-cli.repo; echo "enabled=1" >> jfrog-cli.repo; rpm --import https://releases.jfrog.io/artifactory/api/v2/repositories/jfrog-rpms/keyPairs/primary/public; rpm --import https://releases.jfrog.io/artifactory/api/v2/repositories/jfrog-rpms/keyPairs/secondary/public sudo mv jfrog-cli.repo /etc/yum.repos.d/; yum install -y jfrog-cli;
curl -fL https://getcli.jfrog.io | sh
Note: If you are using any shim-based version managers (like Volta, nvm, etc.) for a package, it is advised to avoid using npm-based installation; instead, please use one of the other installation options JFrog provides.
npm install -g jfrog-cli-go
Slim: docker run releases-docker.jfrog.io/jfrog/jfrog-cli jfrog -v
Full: docker run releases-docker.jfrog.io/jfrog/jfrog-cli-full jfrog -v
GO111MODULE=on go get github.com/jfrog/jfrog-cli; if [ -z "$GOPATH" ]; then binPath="$HOME/go/bin"; else binPath="$GOPATH/bin"; fi; mv "$binPath/jfrog-cli" "$binPath/jfrog"; echo "$($binPath/jfrog -v) is installed at $binPath";
The transfer-files command allows transferring (copying) all the files stored in one Artifactory instance to a different Artifactory instance. The command allows transferring the files stored in a single repository or in multiple repositories. The command expects the relevant repositories to already exist on the target instance, with the same names and types as the repositories on the source.
Artifacts in remote repositories caches are not transferred.
The files transfer process transfers files that were created or modified on the source instance after the process started. However, files that were deleted on the source instance after the process started are not deleted on the target instance by the process.
The files transfer process transfers files that were created or modified on the source instance after the process started, and the custom properties of those files are also updated on the target instance. However, if only the custom properties of a file were modified on the source, but not the file's content, the properties are not updated on the target instance by the process.
The source and target repositories should have the same name and type.
Since the files are pushed from the source to the target instance, the source instance must have a network connection to the target.
Ensure that you can log in to the UI of both the source and target instances with users that have admin permissions and that you have the connection details (including credentials) to both instances.
Ensure that all the repositories on source Artifactory instance which files you'd like to transfer, also exist on the target instance, and have the same name and type on both instances.
Ensure that JFrog CLI is installed on a machine that has network access to both the source and target instances.
To set up the source instance for files transfer, you must install the data-transfer user plugin in the primary node of the source instance. This section guides you through the installation steps.
Install JFrog CLI on the primary node machine of the source instance as described here.
Configure the connection details of the source Artifactory instance with your admin credentials by running the following command from the terminal.
jf c add source-server
Ensure that the JFROG_HOME environment variable is set and holds the value of JFrog installation directory. It usually points to the /opt/jfrog directory. In case the variable isn't set, set its value to point to the correct directory as described in the JFrog Product Directory Structure article.
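For example, if JFrog is installed under the default location:
export JFROG_HOME=/opt/jfrog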
If the source instance has internet access, you can install the data-transfer user plugin on the source machine automatically by running the following command from the terminal:
jf rt transfer-plugin-install source-server
If, however, the source instance has no internet access, install the plugin manually as described here.
Install JFrog CLI on any machine that has access to both the source and the target JFrog instances. To do this, follow the steps described here.
Run the following command to start pushing the files from all the repositories in source instance to the target instance.
jf rt transfer-files source-server target-server
This command may take a few days to push all the files, depending on your system size and your network speed. While the command is running, it displays the transfer progress visually inside the terminal.
If you're running the command in the background, you can use the following command to view the transfer progress.
jf rt transfer-files --status
In case you do not wish to transfer the files from all repositories, or wish to run the transfer in phases, you can use the --include-repos and --exclude-repos command options. Run the following command to see the usage of these options.
jf rt transfer-files -h
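For example, assuming --include-repos accepts a semicolon-separated list of repository names, a transfer limited to two repositories (the names below are placeholders) might look like this:
jf rt transfer-files source-server target-server --include-repos "generic-local;docker-local"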
If the traffic between the source and target instance needs to be routed through an HTTPS proxy, refer to this section.
You can stop the transfer process by pressing CTRL+C if the process is running in the foreground, or by running the following command if you're running the process in the background.
jf rt transfer-files --stop
The process will continue from the point it stopped when you re-run the command.
While the file transfer is running, monitor the load on your source instance, and if needed, reduce the transfer speed or increase it for better performance. For more information, see the Controlling the File Transfer Speed section.
A path to an errors summary file will be printed at the end of the run, referring to a generated CSV file. Each line in the summary CSV represents a file that failed to be transferred. On subsequent executions of the jf rt transfer-files
command, JFrog CLI will attempt to transfer these files again.
Once the jf rt transfer-files
command finishes transferring the files, you can run it again to transfer files which were created or modified during the transfer. You can run the command as many times as needed. Subsequent executions of the command will also attempt to transfer files failed to be transferred during previous executions of the command.
Note:
Read more about how the files transfer process works in this section.
To install the data-transfer user plugin on the source machine manually, follow these steps.
Download the following two files from a machine that has internet access. Download data-transfer.jar from https://releases.jfrog.io/artifactory/jfrog-releases/data-transfer/[RELEASE]/lib/data-transfer.jar and dataTransfer.groovy from https://releases.jfrog.io/artifactory/jfrog-releases/data-transfer/[RELEASE]/dataTransfer.groovy
Create a new directory on the primary node machine of the source instance and place the two files you downloaded inside this directory.
Install the data-transfer user plugin by running the following command from the terminal. Replace the [plugin files dir] token with the full path to the directory which includes the plugin files you downloaded.
jf rt transfer-plugin-install source-server --dir "[plugin files dir]"
Install JFrog CLI on your source instance by using one of the JFrog CLI installers. For example:
curl -fL https://install-cli.jfrog.io | sh
Note
If the source instance is running as a docker container, and you're not able to install JFrog CLI while inside the container, follow these steps.
Connect to the host machine through the terminal.
Download the JFrog CLI executable into the correct directory by running this command:
curl -fL https://getcli.jfrog.io/v2-jf | sh
Copy the JFrog CLI executable you've just downloaded into the container, by running the following docker command. Make sure to replace [the container name] with the name of the container.
docker cp jf [the container name]:/usr/bin/jf
Connect to the container and run the following command to ensure JFrog CLI is installed:
jf -v
The jf rt transfer-files
command pushes the files from the source instance to the target instance as follows:
The files are pushed for each repository, one by one in sequence.
For each repository, the process includes the following three phases:
Phase 1 pushes all the files in the repository to the target.
Phase 2 pushes files which have been created or modified after phase 1 started running (diffs).
Phase 3 attempts to push files which failed to be transferred in earlier phases (Phase 1 or Phase 2) or in previous executions of the command.
If Phase 1 finished running for a specific repository, and you run the jf rt transfer-files
command again, only Phase 2 and Phase 3 will be triggered. You can run the jf rt transfer-files
as many times as needed, till you are ready to move your traffic to the target instance permanently. In any subsequent run of the command, Phase 2 will transfer the newly created and modified files and Phase 3 will retry transferring files which failed to be transferred in previous phases and also in previous runs of the command.
Using Replication
To help reduce the time it takes for Phase 2 to run, you may configure Event Based Push Replication for some or all of the local repositories on the source instance. With Replication configured, when files are created or updated on the source repository, they are immediately replicated to the corresponding repository on the target instance. The replication can be configured at any time. Before, during or after the files transfer process.
You can run the jf rt transfer-files
command multiple times. This is needed to allow transferring files which have been created or updated after previous command executions. To achieve this, JFrog CLI stores the current state of the files transfer process in a directory named transfer located under the JFrog CLI home directory. You can usually find this directory at this location ~/.jfrog/transfer
.
JFrog CLI uses the state stored in this directory to avoid repeating transfer actions performed in previous executions of the command. For example, once Phase 1 is completed for a specific repository, subsequent executions of the command will skip Phase 1 and run Phase 2 and Phase 3 only.
In case you'd like to ignore the stored state, and restart the files transfer from scratch, you can add the --ignore-state
option to the jf rt transfer-files
command.
It is recommended to run the transfer-files
command from a machine that has network access to the source Artifactory URL. This allows spreading the transfer load on all the Artifactory cluster nodes. This machine should also have network access to the target Artifactory URL.
Follow these steps to install JFrog CLI on that machine.
Install JFrog CLI by using one of the JFrog CLI installers. For example:
curl -fL https://install-cli.jfrog.io | sh
If your source instance is accessible only through an HTTP/HTTPS proxy, set the proxy environment variable as described here.
Configure the connection details of the source Artifactory instance with your admin credentials. Run the following command and follow the instructions.
jf c add source-server
Configure the connection details of the target Artifactory instance as follows.
jf c add target-server
The jf rt transfer-files
command pushes the binaries from the source instance to the target instance. This transfer can take days, depending on the size of the total data transferred, the network bandwidth between the source and the target instance, and additional factors.
Since the process is expected to run while the source instance is still being used, monitor the instance to ensure that the transfer does not add too much load to it. Also, you might decide to increase the load for a faster transfer rate while you monitor the transfer. This section describes how to control the file transfer speed.
By default, the jf rt transfer-files command uses 8 working threads to push files from the source instance to the target instance. Reducing this value results in a slower transfer speed and a lower load on the source instance; increasing it does the opposite. We therefore recommend increasing it gradually. This value can be changed while the jf rt transfer-files command is running; there is no need to stop the process to change the number of working threads. The new value is cached by JFrog CLI and also used for subsequent runs from the same machine. To set the value, run the following interactive command from a new terminal window on the same machine that is running the jf rt transfer-files command.
jf rt transfer-settings
Build-info repositories
When transferring files in build-info repositories, JFrog CLI limits the number of working threads to 8. This is done in order to limit the load on the target instance while transferring build-info.
The jf rt transfer-files
command pushes the files directly from the source to the target instance over the network. In case the traffic from the source instance needs to be routed through an HTTPS proxy, follow these steps.
Define the proxy details in the source instance UI as described in the Managing Proxies documentation.
When running the jf rt transfer-files
command, add the --proxy-key
option to the command, with Proxy Key you configured in the UI as the option value. For example, if the Proxy Key you configured is my-proxy-key, run the command as follows:
jf rt transfer-files my-source my-target --proxy-key my-proxy-key
You can use the jf login
command to authenticate with the JFrog Platform through the web browser. This command is solely interactive, meaning it does not receive any options and cannot be used in a CI server.
This command allows creating access tokens for users in the JFrog Platform. By default, a user-scoped token is created. Administrators may provide the scope explicitly with '--scope', or implicitly with '--groups' or '--grant-admin'.
Create an access token for the current user on the default configured server:
Create an access token for the user with the toad username:
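The corresponding commands, also listed in the access-token-create command reference below, are:
jf atc
jf atc toad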
The config add and config edit commands are used to add and edit JFrog Platform server configuration, stored in JFrog CLI's configuration storage. These configured servers can be used by the other commands. The configured servers' details can be overridden per command by passing in alternative values for the URL and login credentials. The configured values are saved in a file under the JFrog CLI home directory.
The config remove command is used to remove JFrog Platform server configuration, stored in JFrog CLI's configuration storage.
The config show command shows the stored configuration. You may show a specific server's configuration by sending its ID as an argument to the command.
The config use command sets a configured server as default. The following commands will use this server.
The config export command generates a token, which stores the server configuration. This token can be used by the config import command, to import the configuration stored in the token, and save it in JFrog CLI's configuration storage.
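For example, you might export a server configuration on one machine and import it on another. The server ID below is a placeholder, and the token is the value printed by the export command:
# On the first machine
jf config export my-server
# On the second machine
jf config import <the-exported-token>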
Starting from version 1.37.0, JFrog CLI introduces support for encrypting sensitive data stored in its configuration using an encryption key stored in a file. Follow these steps to enable encryption:
Generate a random 32-character master key. Ensure that the key size is exactly 32 characters. For example: f84hc22dQfhe9f8ydFwfsdn48!wejh8A
Create a file named security.yaml under ~/.jfrog/security.
If you've customized the default JFrog CLI home directory by setting the JFROG_CLI_HOME_DIR environment variable, create the security/security.yaml file under the configured home directory.
Add the generated master key to the security.yaml file:
Ensure that the security.yaml file has only read permissions for the user running JFrog CLI.
The configuration will be encrypted the next time JFrog CLI accesses the config. If you have existing configurations stored before creating the file, you'll need to reconfigure the servers stored in the config.
Warning: When upgrading JFrog CLI from a version prior to 1.37.0 to version 1.37.0 or above, automatic changes are made to the content of the ~/.jfrog directory to support the new functionality introduced. Before making these changes, the content of the ~/.jfrog directory is backed up inside the ~/.jfrog/backup directory. After enabling sensitive data encryption, it is recommended to remove the backup directory to ensure no sensitive data is left unencrypted.
Starting from version 2.36.0, JFrog CLI also supports encrypting sensitive data in its configuration using an encryption key stored in an environment variable. To enable encryption, follow these steps:
Generate a random 32-character master key. Ensure that the key size is exactly 32 characters. For example: f84hc22dQfhe9f8ydFwfsdn48!wejh8A
Store the key in an environment variable named JFROG_CLI_ENCRYPTION_KEY.
The configuration will be encrypted the next time JFrog CLI attempts to access the config. If you have configurations already stored before setting the environment variable, you'll need to reconfigure the servers stored in the config.
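For example, using the sample key shown above (in practice, generate your own random 32-character key):
# The key must be exactly 32 characters long
export JFROG_CLI_ENCRYPTION_KEY='f84hc22dQfhe9f8ydFwfsdn48!wejh8A'
# Servers configured from now on are stored encrypted
jf c add my-server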
JFrog CLI Plugins allow enhancing the functionality of JFrog CLI to meet specific user and organization needs. The source code of a plugin is maintained as an open-source Go project on GitHub. All public plugins are registered in the JFrog CLI Plugins Registry. We encourage you, as developers, to create plugins and share them publicly with the rest of the community. When a plugin is included in the registry, it becomes publicly available and can be installed using JFrog CLI. Version 1.41.1 or above is required. Plugins can be installed using the following JFrog CLI command:
This article guides you through the process of creating and publishing your own JFrog CLI Plugin.
Make sure Go 1.17 or above is installed on your local machine and is included in your system PATH.
Make sure git is installed on your local machine and is included in your system PATH.
Go to the jfrog-cli-plugin-template repository on GitHub.
Press the Use this template button to create a new repository. You may name it as you like.
Clone your new repository to your local machine. For example:
Run the following commands, to build and run the template plugin.
Open the plugin code with your favorite IDE and start having fun.
Well, plugins can do almost anything. The sky is the limit.
You have access to most of the JFrog CLI code base. This is because your plugin code depends on the jfrog-cli-core module, which is a dependency declared in your project's go.mod file. Feel free to explore the jfrog-cli-core code base and use it as part of your plugin.
You can also add other Go packages to your go.mod and use them in your code.
You can package any external resources, such as executables or configuration files, and have them published alongside your plugin. Read more about this below.
To make a new plugin available for anyone to use, you need to register the plugin in the JFrog CLI Plugins Registry. The registry is hosted in a GitHub repository and includes a descriptor file in YAML format for each registered plugin, inside the plugins directory. To include your plugin in the registry, create a pull request that adds the descriptor file for your plugin, named according to this format: your-plugin-name.yml.
To publish your plugin, you need to include it in the JFrog CLI Plugins Registry. Please make sure your plugin meets the following guidelines before publishing it.
Read the document. You'll be asked to accept it before your plugin becomes available.
Code structure. Make sure the plugin code is structured similarly to the plugin template project. Specifically, it should include a commands package, and a separate file for each command.
Tests. The plugin code should include a series of thorough tests. Use the plugin template project as a reference for how tests should be included as part of the source code. The tests should be executed using the following Go command while inside the root directory of the plugin project. Note: The Registry verifies the plugin and tries to run your plugin tests using the following command. go vet -v ./... && go test -v ./...
Code formatting. To make sure the code is formatted properly, run the following go command on your plugin sources, while inside the root of your project directory. go fmt ./...
Plugin name. The plugin name should include only lower-case characters, numbers and dashes. The name length should not exceed 30 characters. It is recommended to use a short name for the users' convenience, but also make sure that the name hints at the plugin's functionality.
Create a Readme. Make sure that your plugin code includes a README.md file and place it in the root of the repository. The README needs to be structured according to the plugin template's README and include all the information and details relevant to the plugin's users.
Consider creating a tag for your plugin sources. Although this is not mandatory, we recommend creating a tag for your GitHub repository before publishing the plugin. You can then provide this tag to the Registry when publishing the plugin, to make sure the correct code is built.
Plugin version. Make sure that your built plugin has the correct version. The version is declared as part of the plugin sources. To check your plugin version, run the plugin executable with the -v option, for example: ./my-plugin -v. The plugin version should have a v prefix, for example v1.0.0, and it should follow the semantic versioning guidelines.
Please make sure that the extension of your plugin descriptor file is yml and not yaml.
Please make sure your pull request includes only one or more plugin descriptors. Please do not add, edit or remove other files.
pluginName - The name of the plugin. This name should match the plugin name set in the plugin's code.
version - The version of the plugin. This version should have a v prefix and match the version set in the plugin's code.
repository - The plugin's code GitHub repository URL.
maintainers - The GitHub usernames of the plugin maintainers.
relativePath - If the plugin's go.mod file is not located at the root of the GitHub repository, set the relative path to this file. This path should not include the go.mod file.
branch - Optionally set an existing branch in your plugin's GitHub repository.
tag - Optionally set an existing tag in your plugin's GitHub repository.
To publish a new version of your plugin, all you need to do is create a pull request, which updates the version inside your plugin descriptor file. If needed, your change can also include either the branch or tag.
In addition to the public official JFrog CLI Plugins Registry, JFrog CLI supports publishing and installing plugins to and from private JFrog CLI Plugins Registries. A private registry can be hosted on any Artifactory server. It uses a local generic Artifactory repository for storing the plugins.
To create your own private plugins registry, follow these steps.
On your Artifactory server, create a local generic repository named jfrog-cli-plugins.
Make sure your Artifactory server is included in JFrog CLI's configuration, by running the jf c show
command.
If needed, configure your Artifactory instance using the jf c add
command.
Set the ID of the configured server as the value of the JFROG_CLI_PLUGINS_SERVER environment variable.
If you wish the name of the plugins repository to be different from jfrog-cli-plugins, set this name as the value of the JFROG_CLI_PLUGINS_REPO environment variable.
The jf plugin install
command will now install plugins stored in your private registry.
To publish a plugin to the private registry, run the following command, while inside the root of the plugin's sources directory. This command will build the sources of the plugin for all the supported operating systems. All binaries will be uploaded to the configured registry.
jf plugin publish the-plugin-name the-plugin-version
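A minimal sketch of the private-registry flow, assuming a configured server with the ID my-artifactory and a plugin named my-plugin:
# Point JFrog CLI at the private registry
export JFROG_CLI_PLUGINS_SERVER=my-artifactory
# Optional: use a repository name other than the default jfrog-cli-plugins
export JFROG_CLI_PLUGINS_REPO=my-cli-plugins
# Publish the plugin (run from the root of the plugin's sources directory)
jf plugin publish my-plugin v1.0.0
# Install the plugin from the private registry
jf plugin install my-plugin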
When installing a plugin using the jf plugin install command, the plugin is downloaded into its own directory under the plugins directory, which is located under the JFrog CLI home directory. By default, you can find the plugins directory under ~/.jfrog/plugins/. So if, for example, you are developing a plugin named my-plugin and you'd like to test it with JFrog CLI before publishing it, you'll need to place your plugin's executable, named my-plugin, under the following path -
If your plugin also uses resources, you should place them under the following path -
Once the plugin's executable is there, you'll be able to see that it is installed by simply running jf.
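For example, on Linux or macOS, a local test setup for a plugin named my-plugin might look like this, assuming the default JFrog CLI home directory:
# Build the plugin executable
go build -o my-plugin
# Place it where JFrog CLI looks for installed plugins
mkdir -p ~/.jfrog/plugins/my-plugin/bin
cp my-plugin ~/.jfrog/plugins/my-plugin/bin/
# The plugin should now appear in the command list printed by:
jf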
In some cases your plugin may need to use external resources. For example, the plugin code may need to run an executable or read from a configuration file. You would therefore want these resources to be packaged together with the plugin, so that when it is installed, these resources are also downloaded and become available for the plugin.
The way to include resources for your plugin is to simply place them inside a directory named resources at the root of the plugin's sources directory. You can create any directory structure inside resources. When publishing the plugin, the content of the resources directory is published alongside the plugin executable. When installing the plugin, the resources are also downloaded.
When installing a plugin, the plugin's resources are downloaded to the following directory under the JFrog CLI home -
This means that during development, you'll need to make sure the resources are placed there, so that your plugin code can access them. Here's how your plugin code can access the resources directory -
Command name
access-token-create
Abbreviation
atc
Command arguments:
username
The username for which this token is created. If not specified, the token will be created for the current user.
Command options:
--audience
[Optional]
A space-separated list of the other instances or services that should accept this token identified by their Service-IDs.
--description
[Optional]
Free text token description. Useful for filtering and managing tokens. Limited to 1024 characters.
--expiry
[Optional]
The amount of time, in seconds, it would take for the token to expire. Must be non-negative. If not provided, the platform default will be used. To specify a token that never expires, set to zero. Non-admin may only set a value that is equal or lower than the platform default that was set by an administrator (1 year by default).
--grant-admin
[Default: false]
Set to true to provide admin privileges to the access token. This is only available for administrators.
--groups
[Optional]
A list of comma-separated(,) groups for the access token to be associated with. This is only available for administrators.
--project
[Optional]
The project for which this token is created. Enter the project name on which you want to apply this token.
--reference
[Default: false]
Generate a Reference Token (alias to Access Token) in addition to the full token (available from Artifactory 7.38.10).
--refreshable
[Default: false]
Set to true if you'd like the token to be refreshable. A refresh token will also be returned in order to be used to generate a new token once it expires.
--scope
[Optional]
The scope of access that the token provides. This is only available for administrators.
jf atc
jf atc toad
Command Name
config add / config edit
Abbreviation
c add / c edit
Command options:
--access-token
[Optional]
Access token.
--artifactory-url
[Optional]
JFrog Artifactory URL. (example: https://acme.jfrog.io/artifactory)
--basic-auth-only
[Default: false]
Used for Artifactory authentication. Set to true to disable replacing username and password/API key with automatically created access token that's refreshed hourly. Username and password/API key will still be used with commands which use external tools or the JFrog Distribution service. Can only be passed along with username and password/API key options.
--client-cert-key-path
[Optional]
Private key file for the client certificate in PEM format.
--client-cert-path
[Optional]
Client certificate file in PEM format.
--dist-url
[Optional]
Distribution URL. (example: https://acme.jfrog.io/distribution)
--enc-password
[Default: true] If true, the configured password will be encrypted using Artifactory's encryption API before being stored. If false, the configured password will not be encrypted.
--insecure-tls
[Default: false]
Set to true to skip TLS certificates verification, while encrypting the Artifactory password during the config process.
--interactive
[Default: true, unless $CI is true]
Set to false if you do not want the config command to be interactive.
--mission-control-url
[Optional]
JFrog Mission Control URL. (example: https://acme.jfrog.io/ms)
--password
[Optional]
JFrog Platform password.
--ssh-key-path
[Optional]
For authentication with Artifactory. SSH key file path.
--url
[Optional]
JFrog Platform URL. (example: https://acme.jfrog.io)
--user
[Optional]
JFrog Platform username.
--xray-url
[Optional] Xray URL. (example: https://acme.jfrog.io/xray)
--overwrite
[Available for config add only] [Default: false] Overwrites the instance configuration if an instance with the same ID already exists.
Command arguments:
server ID
A unique ID for the server configuration.
Command name
config remove
Abbreviation
c rm
Command options:
--quiet
[Default: $CI]
Set to true to skip the delete confirmation message.
Command arguments:
server ID
The server ID to remove. If no argument is sent, all configured servers are removed.
Command name
config show
Abbreviation
c s
Command arguments:
server ID
The ID of the server to show. If no argument is sent, all configured servers are shown.
Command name
config use
Command arguments:
server ID
The ID of the server to set as default.
Command name
config export
Abbreviation
c ex
Command arguments:
server ID
The ID of the server to export
Command name
config import
Abbreviation
c im
Command arguments:
server token
The token to import
version: 1
masterKey: "your master key"
$ jf plugin install the-plugin-name
$ git clone https://github.com/jfrog/jfrog-cli-plugin-template.git
$ cd jfrog-cli-plugin-template
$ go build -o hello-frog
$ ./hello-frog --help
$ ./hello-frog hello --help
$ ./hello-frog hello Yey!
# Mandatory:
pluginName: hello-frog
version: v1.0.0
repository: https://github.com/my-org/my-amazing-plugin
maintainers:
- github-username1
- github-username2
# Optional:
relativePath: build-info-analyzer
# You may set either branch or tag, but not both
branch: my-release-branch
tag: my-release-tag
plugins/my-plugin/bin/
plugins/my-plugin/resources/
plugins/my-plugin/resources/
import (
"github.com/jfrog/jfrog-cli-core/v2/utils/coreutils"
)
...
dir, err := coreutils.GetJfrogPluginsResourcesDir("my-plugin-name")
...
This page describes how to use JFrog CLI to create external evidence files, which are then deployed to Artifactory. You can create evidence for:
Artifacts
Packages
Builds
Release Bundles v2
Note
The Evidence service requires Artifactory 7.104.2 or above.
The ability for users to attach external evidence to Artifactory, as described here, requires an Enterprise+ subscription.
The ability to collect internal evidence generated by Artifactory requires a Pro subscription or above. Internal evidence generated by Xray requires a Pro X subscription or above.
In the current release, an evidence file can be signed with one key only.
The maximum size evidence file supported by Artifactory is 16MB.
For more information about the API used for deploying evidence to Artifactory, see .
To deploy external evidence, use an access token or the web login mechanism for authentication. Basic authentication (username/password) is not supported.
JFrog CLI uses the following syntax for creating evidence:
Artifact Evidence
Package Evidence
Build Evidence
Release Bundle v2 Evidence
--predicate
file-path
Mandatory field.
Defines the path to a locally-stored, arbitrary json file that contains the predicates.
--predicate-type
predicate-type-uri
Mandatory field.
The type of predicate defined by the JSON file. Sample predicate type URIs include:
--key
local-private-key-path
Optional path for a private key (see Tip below). Supported key types include:
Tip
You can define the key using the JFROG_CLI_SIGNING_KEY environment variable as an alternative to using the --key command parameter. If the environment variable is not defined, the --key command parameter is mandatory.
Note
Two key formats are supported: PEM and SSH
--key-alias
RSA-1024
Optional case-sensitive name for the public key created from the private key. The public key is used to verify the DSSE envelope that contains the evidence.
If the key-alias is included, DSSE verification will fail if a key with the same alias is not found in Artifactory.
If the key-alias is not included, DSSE verification with the public key is not performed during creation.
Tip
You can define a key alias using the JFROG_CLI_KEY_ALIAS environment variable as an alternative to using the --key-alias command parameter.
Note
In the unlikely event the public key is deleted from Artifactory, it may take up to 4 hours for the Evidence service to clear the key from the cache. Evidence can still be signed with the deleted key during this time.
--markdown
md file
Optional path to a file that contains evidence formatted in markdown.
--subject-repo-path
target-path
Mandatory field.
Each evidence file must have a single subject only and must include the path. Artifacts located in local repositories aggregated inside virtual repositories are supported (evidence is added to the local path).
--subject-sha256
digest
Optional digest (sha256) of the artifact.
If a digest is provided, it is verified against the subject's sha256 as it appears in Artifactory.
If a digest is not provided, the sha256 is taken from the path in Artifactory.
--package-name
name
Mandatory field.
--package-version
version-number
Mandatory field.
--package-repo-name
repo-name
Mandatory field.
--build-name
name
Mandatory field unless environment variables are used (see tip below).
--build-number
version-number
Mandatory field unless environment variables are used (see tip below).
Tip
You can use the JFROG_CLI_BUILD_NAME and JFROG_CLI_BUILD_NUMBER environment variables as an alternative to the build command parameters.
--release-bundle
name
Mandatory field.
--release-bundle-version
version-number
Mandatory field.
Note
When DSSE verification is successful, the following message is displayed:
When DSSE verification is unsuccessful, the following message is displayed:
Artifact Evidence Sample
In the sample above, the command creates a signed evidence file with a predicate type of SLSA provenance for an artifact named file.txt.
Package Evidence Sample
Build Evidence Sample
Release Bundle v2 Evidence Sample
jf evd create --predicate file-path --predicate-type predicate-type-uri --subject-repo-path <target-path> --subject-sha256 <digest> --key <local-private-key-path> --key-alias <public-key-name>
jf evd create --predicate file-path --predicate-type predicate-type-uri --package-name <name> --package-version <version-number> --package-repo-name <repo-name> --key <local-private-key-path> --key-alias <public-key-name>
jf evd create --predicate file-path --predicate-type predicate-type-uri --build-name <name> --build-number <version-number> --key <local-private-key-path> --key-alias <public-key-name>
jf evd create --predicate file-path --predicate-type predicate-type-uri --release-bundle <name> --release-bundle-version <version-number> --key <local-private-key-path> --key-alias <public-key-name>
{
// any kind of valid json
}
https://in-toto.io/attestation/link/v0.3
https://in-toto.io/attestation/scai/attribute-report
https://in-toto.io/attestation/runtime-trace/v0.1
https://in-toto.io/attestation/test-result/v0.1
https://in-toto.io/attestation/vulns
`rsa`
`ed25519`
`ecdsa`
Evidence successfully created and verified.
Evidence successfully created but not verified due to missing/invalid public key.
evd create --predicate /Users/jsmith/Downloads/code-review.json --predicate-type https://in-toto.io/attestation/vulns --subject-repo-path commons-dev-generic-local/commons/file.txt --subject-sha256 69d29925ba75eca8e67e0ad99d1132b47d599c206382049bc230f2edd2d3af30 --key /Users/jsmith/Documents/keys/private.pem --key-alias xyzey
evd create --predicate /Users/jsmith/Downloads/code-review.json --predicate-type https://in-toto.io/attestation/vulns --package-name DockerPackage --package-version 1.0.0 --package-repo-name local-docker --key /Users/jsmith/Documents/keys/private.pem --key-alias xyzey
evd create --predicate /Users/jsmith/Downloads/code-review.json --predicate-type https://in-toto.io/attestation/vulns --build-name my-build --build-number 5 --key /Users/jsmith/Documents/keys/private.pem --key-alias xyzey
evd create --predicate /Users/jsmith/Downloads/code-review.json --predicate-type https://in-toto.io/attestation/vulns --release-bundle bundledemo --release-bundle-version 1.0.0 --key /Users/jsmith/Documents/keys/private.pem --key-alias xyzey
To achieve complex file manipulations, you may require several CLI commands. For example, you may need to upload several different sets of files to different repositories. To simplify the implementation of these complex manipulations, you can use the JFrog CLI download, upload, move, copy, and delete commands with JFrog Artifactory using the --spec option to replace the inline command arguments and options. Similarly, you can create and update release bundles by providing the --spec command option. Each command uses an array of file specifications in JSON format with a corresponding schema as described in the sections below. Note that if any of these commands are issued using both inline options and a file spec, the inline options override their counterparts specified in the file spec.
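For example, assuming a spec file named upload-spec.json in the current directory:
jf rt upload --spec upload-spec.json
# Inline options override their counterparts in the spec, for example:
jf rt upload --spec upload-spec.json --flat=true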
The file spec schema for the copy and move commands is as follows:
{
"files": [
{
"pattern" or "aql": "[Mandatory]",
"target": "[Mandatory]",
"props": "[Optional]",
"excludeProps": "[Optional]",
"recursive": "[Optional, Default: 'true']",
"flat": "[Optional, Default: 'false']",
"exclusions": "[Optional, Applicable only when 'pattern' is specified]",
"archiveEntries": "[Optional]",
"build": "[Optional]",
"bundle": "[Optional]",
"validateSymlinks": "[Optional]",
"sortBy": "[Optional]",
"sortOrder": "[Optional, Default: 'asc']",
"limit": "[Optional],
"offset": [Optional] }
]
}
The file spec schema for the download command is as follows:
{
"files": [
{
"pattern" or "aql": "[Mandatory]",
"target": "[Optional]",
"props": "[Optional]",
"excludeProps": "[Optional]",
"recursive": "[Optional, Default: 'true']",
"flat": "[Optional, Default: 'false']",
"exclusions": "[Optional, Applicable only when 'pattern' is specified]",
"archiveEntries": "[Optional]",
"build": "[Optional]",
"bundle": "[Optional]",
"sortBy": "[Optional]",
"sortOrder": "[Optional, Default: 'asc']",
"limit": [Optional],
"offset": [Optional] }
]
}
The file spec schema for the create and update release bundle v1 commands is as follows:
{
"files": [
{
"pattern" or "aql": "[Mandatory]",
"pathMapping": "[Optional, Applicable only when 'aql' is specified]",
"target": "[Optional]",
"props": "[Optional]",
"targetProps": "[Optional]",
"excludeProps": "[Optional]",
"recursive": "[Optional, Default: 'true']",
"flat": "[Optional, Default: 'false']",
"exclusions": "[Optional, Applicable only when 'pattern' is specified]",
"archiveEntries": "[Optional]",
"build": "[Optional]",
"bundle": "[Optional]",
"sortBy": "[Optional]",
"sortOrder": "[Optional, Default: 'asc']",
"limit": [Optional],
"offset": [Optional] }
]
}
The file spec schema for the upload command is as follows:
{
"files": [
{
"pattern": "[Mandatory]",
"target": "[Mandatory]",
"targetProps": "[Optional]",
"recursive": "[Optional, Default: 'true']",
"flat": "[Optional, Default: 'true']",
"regexp": "[Optional, Default: 'false']",
"ant": "[Optional, Default: 'false']",
"archive": "[Optional, Must be: 'zip']",
"exclusions": "[Optional]" }
]
}
The file spec schema for the search and delete commands is as follows:
{
"files": [
{
"pattern" or "aql": "[Mandatory]",
"props": "[Optional]",
"excludeProps": "[Optional]",
"recursive": "[Optional, Default: 'true']",
"exclusions": "[Optional, Applicable only when 'pattern' is specified]",
"archiveEntries": "[Optional]",
"build": "[Optional]",
"bundle": "[Optional]",
"sortBy": "[Optional]",
"sortOrder": "[Optional, Default: 'asc']",
"limit": [Optional],
"offset": [Optional] }
]
}
The following examples can help you get started using File Specs.
Download all files located under the all-my-frogs directory in the my-local-repo repository to the froggy directory.
{
"files": [
{
"pattern": "my-local-repo/all-my-frogs/",
"target": "froggy/" }
]
}
Download all files located under the all-my-frogs directory in the my-local-repo repository to the froggy directory. Download only files which are artifacts of build number 5 of the my-build build.
{
"files": [
{
"pattern": "my-local-repo/all-my-frogs/",
"target": "froggy/",
"build": "my-build/5"
}
]
}
Download all files retrieved by the AQL query to the froggy directory.
{
"files": [
{
"aql": {
"items.find": {
"repo": "my-local-repo",
"$or": [
{
"$and": [
{
"path": {
"$match": "."
},
"name": {
"$match": "a1.in"
}
}
]
},
{
"$and": [
{
"path": {
"$match": "*"
},
"name": {
"$match": "a1.in"
}
}
]
}
]
}
},
"target": "froggy/"
}
]
}
Upload all zip files located under the resources directory to the zip folder, under the all-my-frogs repository, and all TGZ files located under the resources directory to the tgz folder, under the all-my-frogs repository.
Tag all zip files with type = zip and status = ready.
Tag all tgz files with type = tgz and status = ready.
{
"files": [
{
"pattern": "resources/*.zip",
"target": "all-my-frogs/zip/",
"props": "type=zip;status=ready"
},
{
"pattern": "resources/*.tgz",
"target": "all-my-frogs/tgz/",
"props": "type=tgz;status=ready"
}
]
}
Upload all zip files located under the resources directory to the zip folder, under the all-my-frogs repository.
{
"files": [
{
"pattern": "resources/*.zip",
"target": "all-my-frogs/zip/"
}
]
}
Package all files located under the resources directory (including subdirectories) into a zip archive named archive.zip, and upload it into the root of the all-my-frogs repository.
{
"files": [
{
"pattern": "resources/",
"archive": "zip",
"target": "all-my-frogs/"
}
]
}
Download all files located under the all-my-frogs directory in the my-local-repo repository, except for files with a .txt extension and all files inside the all-my-frogs directory with the props. prefix.
Notice that the exclude patterns do not include the repository.
{
"files": [
{
"pattern": "my-local-repo/all-my-frogs/",
"exclusions": ["*.txt","all-my-frog/props.*"]
}
]
}
Download the latest file uploaded to the all-my-frogs directory in the my-local-repo repository.
{
"files": [
{
"pattern": "my-local-repo/all-my-frogs/",
"target": "all-my-frogs/files/",
"sortBy": ["created"],
"sortOrder": "desc",
"limit": 1
}
]
}
Search for the three largest files located under the all-my-frogs directory in the my-local-repo repository. If there are files with the same size, sort them "internally" by creation date.
{
"files": [
{
"pattern": "my-local-repo/all-my-frogs/",
"sortBy": ["size","created"],
"sortOrder": "desc",
"limit": 3
}
]
}
Download the second-latest file uploaded to the all-my-frogs directory in the my-local-repo repository.
{
"files": [
{
"pattern": "my-local-repo/all-my-frogs/",
"target": "all-my-frogs/files/",
"sortBy": ["created"],
"sortOrder": "desc",
"limit": 1,
"offset": 1
}
]
}
This example shows how to delete artifacts in Artifactory under a specified path, based on how old they are.
The following File Spec finds all the folders which match the following criteria:
They are under the my-repo repository.
They are inside a folder with a name that matches abc-*-xyz and is located at the root of the repository.
Their name matches ver*
They were created more than 7 days ago.
{
"files": [
{
"aql": {
"items.find": {
"repo": "myrepo",
"path": {"$match":"abc-*-xyz"},
"name": {"$match":"ver*"},
"type": "folder",
"$or": [
{
"$and": [
{
"created": { "$before":"7d" }
}
]
}
]
}
}
}
]
}
This example uses placeholders. For each .tgz file in the source directory, create a corresponding directory with the same name in the target repository and upload the file to it. For example, a file named froggy.tgz should be uploaded to my-local-repo/froggy (froggy will be created as a folder in Artifactory).
{
"files": [
{
"pattern": "(*).tgz",
"target": "my-local-repo/{1}/",
}
]
}
This example uses placeholders. Upload all files whose names begin with "frog" to the frogfiles folder in the target repository, appending the text "-up" to each file name. For example, a file called froggy.tgz should be renamed froggy.tgz-up.
{
"files": [
{
"pattern": "(frog*)",
"target": "my-local-repo/frogfiles/{1}-up",
"recursive": "false"
}
]
}
The following two examples lead to the exact same outcome. The first one uses placeholders, while the second one does not. Both examples download all files from the generic-local repository to the my/local/path/ local file-system path, while maintaining the original Artifactory folder hierarchy. Notice the different flat values in the two examples.
{
"files": [
{
"pattern": "generic-local/{*}",
"target": "my/local/path/{1}",
"flat": "true"
}
]
}
{
"files": [
{
"pattern": "generic-local/",
"target": "my/local/path/",
"flat": "false"
}
]
}
This example creates a release bundle v1 and applies "pathMapping" to the artifact paths after distributing the release bundle v1.
All occurrences of the "a1.in" file are fetched and mapped to the "froggy" repository at the edges.
Fetch all artifacts retrieved by the AQL query.
Create the release bundle v1 with the artifacts and apply the path mappings at the edges after distribution.
The "pathMapping" option is provided, allowing users to control the destination of the release bundle artifacts at the edges.
To learn more, visit the Create Release Bundle v1 Version documentation.
{
"files": [
{
"aql": {
"items.find": {
"repo": "my-local-repo",
"$and": [
{
"name": {
"$match": "a1.in"
}
},
{
"$or": [
{
"path": {
"$match": "."
}
},
{
"path": {
"$match": "*"
}
}
]
}
]
}
},
"pathMapping": {
"input": "my-local-repo/(.*)",
"output": "froggy/$1"
}
}
]
}
JSON schemas allow you to annotate and validate JSON files. The JFrog File Spec schema is available in the JSON Schema Store catalog and in the following link: https://github.com/jfrog/jfrog-cli/blob/v2/schema/filespec-schema.json.
The File Spec schema is automatically applied to the following file patterns:
**/filespecs/*.json
*filespec*.json
*.filespec
To apply the File Spec schema validation, install the JFrog VS-Code extension.
Alternatively, copy the following to your settings.json file:
settings.json
"json.schemas": [
{
"fileMatch": ["**/filespecs/*.json", "\*filespec\*.json", "*.filespec"],
"url": "https://raw.githubusercontent.com/jfrog/jfrog-cli/v2/schema/filespec-schema.json"
}
]
This page describes how to use JFrog CLI with JFrog Distribution.
Read more about JFrog CLI here.
When used with JFrog Distribution, JFrog CLI uses the following syntax:
$ jf ds command-name global-options command-options arguments
The following sections describe the commands available in the JFrog CLI for use with JFrog Distribution.
These commands create and update an unsigned Release Bundle on JFrog Distribution.
Note
These commands require version 2.0 or higher of JFrog Distribution.
Command-name
release-bundle-create / release-bundle-update
Abbreviation
rbc / rbu
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--spec
[Optional] Path to a File Spec. For more details, refer to the File Specs documentation.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--target-props
[Optional] The list of properties, in the form of "key1=value1;key2=value2;...", to be added to the artifacts after distribution of the release bundle.
--target
[Optional] The target path for distributed artifacts on the edge node. If not specified, the artifacts will have the same path and name on the edge node as on the source Artifactory server. For flexibility in specifying the distribution path, you can include placeholders in the form of {1}, {2}, which are replaced by corresponding tokens in the pattern path that are enclosed in parentheses.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--sign
[Default: false] If set to true, automatically signs the release bundle version.
--passphrase
[Optional] The passphrase for the signing key.
--desc
[Optional] Description of the release bundle.
--release-notes-path
[Optional] Path to a file that describes the release notes for the release bundle version.
--release-notes-syntax
[Default: plain_text] The syntax for the release notes. Can be one of markdown, asciidoc, or plain_text.
--exclusions
[Optional] A list of semicolon-separated(;) exclude path patterns, to be excluded from the Release Bundle. Allows using wildcards.
--repo
[Optional] A repository name at source Artifactory to store release bundle artifacts in. If not provided, Artifactory will use the default one.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--detailed-summary
[Default: false] Set to true to return the SHA256 value of the release bundle manifest.
Command arguments:
release bundle name
The name of the release bundle.
release bundle version
The release bundle version.
pattern
Specifies the source path in Artifactory, from which the artifacts should be bundled, in the following format: <repository name>/<repository path>. You can use wildcards to specify multiple artifacts. This argument should not be sent along with the --spec option.
Create a release bundle with name myApp and version 1.0.0. The release bundle will include the files defined in the File Spec specified by the --spec option.
jf ds rbc --spec=/path/to/rb-spec.json myApp 1.0.0
Create a release bundle with name myApp and version 1.0.0. The release bundle will include the files defined in the File Spec specified by the --spec option. GPG sign the release bundle after it is created.
jf ds rbc --spec=/path/to/rb-spec.json --sign myApp 1.0.0
Update the release bundle with name myApp and version 1.0.0. The release bundle will include the files defined in the File Spec specified by the --spec option.
jf ds rbu --spec=/path/to/rb-spec.json myApp 1.0.0
Update the release bundle with name myApp and version 1.0.0. The release bundle will include all the zip files inside the zip folder, located at the root of the my-local-repo repository.
jf ds rbu myApp 1.0.0 "my-local-repo/zips/*.zip"
Update the release bundle with name myApp and version 1.0.0. The release bundle will include all the zip files inside the zip folder, located at the root of the my-local-repo repository. The files will be distributed on the Edge Node to the target-zips folder, under the root of the my-target-repo repository.
jf ds rbu myApp 1.0.0 "my-local-repo/zips/*.zip" --target my-target-repo/target-zips/
This example uses placeholders. It creates the release bundle with name myApp and version 1.0.0. The release bundle will include all the zip files inside the zip folder, located at the root of the my-local-repo repository. The files will be distributed on the Edge Node to the target-zips folder, under the root of the my-target-repo repository. In addition, the distributed files will be renamed on the Edge Node, by adding -target to the name of each file.
jf ds rbc myApp 1.0.0 "my-local-repo/zips/(*).zip" --target "my-target-repo/target-zips/{1}-target.zip"
This example creates a release bundle and applies "pathMapping" to the artifact paths after distributing the release bundle.
All occurrences of the "a1.in" file are fetched and mapped to the "froggy" repository at the edges.
Fetch all artifacts retrieved by the AQL query.
Create the release bundle with the artifacts and apply the path mappings at the edges after distribution.
The "pathMapping" option is provided, allowing users to control the destination of the release bundle artifacts at the edges.
To learn more, visit the Create Release Bundle v1 Version documentation.
Note: The "target" option is designed to work for most use cases. The "pathMapping" option is intended for specific use cases, such as including a list.manifest.json file inside the release bundle.
In that scenario, the distribution server dynamically includes all the manifest.json and their layers and assigns the given path mapping, whereas "target" doesn't achieve this.
jf ds rbc --spec=/path/to/rb-spec.json myApp 1.0.0
Spec file content:
{
"files": [
{
"aql": {
"items.find": {
"repo": "my-local-repo",
"$and": [
{
"name": {
"$match": "a1.in"
}
},
{
"$or": [
{
"path": {
"$match": "."
}
},
{
"path": {
"$match": "*"
}
}
]
}
]
}
},
"pathMapping": {
"input": "my-local-repo/(.*)",
"output": "froggy/$1"
}
}
]
}
This command GPG signs an existing Release Bundle on JFrog Distribution.
Note
These commands require version 2.0 or higher of JFrog Distribution.
Command-name
release-bundle-sign
Abbreviation
rbs
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--passphrase
[Optional] The passphrase for the signing key.
--repo
[Optional] A repository name at source Artifactory to store release bundle artifacts in. If not provided, Artifactory will use the default one.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--detailed-summary
[Default: false] Set to true to return the SHA256 value of the release bundle manifest.
Command arguments:
release bundle name
The name of the release bundle.
release bundle version
The release bundle version.
GPG sign the release bundle with name myApp and version 1.0.0.
jf ds rbs --passphrase="<passphrase>" myApp 1.0.0
This command distributes a release bundle to the Edge Nodes.
Note
These commands require version 2.0 or higher of JFrog Distribution.
Command-name
release-bundle-distribute
Abbreviation
rbd
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--sync
[Default: false] Set to true to enable sync distribution (the command execution will end when the distribution process ends).
--max-wait-minutes
[Default: 60] Max minutes to wait for sync distribution.
--create-repo
[Default: false] Set to true to create the repository on the edge if it does not exist.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--dist-rules
[Optional] Path to a file, which includes the Distribution Rules in a JSON format. This is the Distribution Rules JSON structure:
{
  "distribution_rules": [
    {
      "site_name": "DC-1",
      "city_name": "New-York",
      "country_codes": ["1"]
    },
    {
      "site_name": "DC-2",
      "city_name": "Tel-Aviv",
      "country_codes": ["972"]
    }
  ]
}
The Distribution Rules format also supports wildcards. For example:
{
  "distribution_rules": [
    {
      "site_name": "*",
      "city_name": "*",
      "country_codes": ["*"]
    }
  ]
}
--site
[Default: *] Wildcard filter for site name.
--city
[Default: *] Wildcard filter for site city name.
--country-codes
[Default: *] semicolon-separated(;) list of wildcard filters for site country codes.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
Command arguments:
release bundle name
The name of the release bundle.
release bundle version
The release bundle version.
Distribute the release bundle with name myApp and version 1.0.0. Use the distribution rules defined in the specified file.
jf ds rbd --dist-rules=/path/to/dist-rules.json myApp 1.0.0
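For reference, the following is a minimal dist-rules.json sketch that follows the Distribution Rules structure described for the --dist-rules option above; it matches all sites by using wildcards:
{
  "distribution_rules": [
    {
      "site_name": "*",
      "city_name": "*",
      "country_codes": ["*"]
    }
  ]
}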
This command deletes a Release Bundle from the Edge Nodes and optionally from Distribution as well.
Note
These commands require version 2.0 or higher of JFrog Distribution.
Command-name
release-bundle-delete
Abbreviation
rbdel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--sync
[Default: false] Set to true to enable sync deletion (the command execution will end when the deletion process ends).
--max-wait-minutes
[Default: 60] Max minutes to wait for sync deletion.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--dist-rules
[Optional] Path to a file, which includes the distribution rules in a JSON format.
--site
[Default: *] Wildcard filter for site name.
--city
[Default: *] Wildcard filter for site city name.
--country-codes
[Default: *] semicolon-separated(;) list of wildcard filters for site country codes.
--delete-from-dist
[Default: false] Set to true to delete release bundle version in JFrog Distribution itself after deletion is complete in the specified Edge nodes.
--quiet
[Default: false] Set to true to skip the delete confirmation message.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
Command arguments:
release bundle name
The name of the release bundle.
release bundle version
The release bundle version.
Delete the release bundle with name myApp and version 1.0.0 from the Edge Nodes only, according to the definition in the distribution rules file.
jf ds rbdel --dist-rules=/path/to/dist-rules.json myApp 1.0.0
Delete the release bundle with name myApp and version 1.0.0 from the Edge Nodes, according to the definition in the distribution rules file. The release bundle will also be deleted from the Distribution service itself.
jf ds rbdel --delete-from-dist --dist-rules=/path/to/dist-rules.json myApp 1.0.0
JFrog provides you the ability to migrate from a self-hosted JFrog Platform installation to JFrog Cloud so that you can seamlessly transition into JFrog Cloud. You can use the JFrog CLI to transfer the Artifactory configuration settings and binaries to JFrog Cloud.
JFrog Cloud provides the same cutting-edge functionalities of a self-hosted JFrog Platform Deployment (JPD), without the overhead of managing the databases and systems. If you are an existing JFrog self-hosted customer, you might want to move to JFrog Cloud to ease operations. JFrog provides a solution that allows you to replicate your self-hosted JPD to a JFrog Cloud JPD painlessly.
The Artifactory Transfer solution currently transfers the config and data of JFrog Artifactory only. Other products, such as JFrog Xray and Distribution, are currently not supported by this solution.
In this page, we refer to the source self-hosted instance as the source instance, and the target JFrog Cloud instance as the target instance.
Artifactory Version Support: The Artifactory Transfer solution is supported for any version of Artifactory 7.x, and for Artifactory version 6.23.21 and above. If your current Artifactory version is not a compatible version, consider upgrading the Artifactory instance.
Supported OS Platforms: The transfer tool can help transfer the files and configuration from operating systems of all types, including Windows and Container environments.
The following limitations need to be kept in mind before you start the migration process:
The Archive Search Enabled feature is not supported on JFrog Cloud.
Artifactory System Properties are not transferred and JFrog Cloud defaults are applied after the transfer.
User plugins are not supported on JFrog Cloud.
Artifact Cold Storage is not supported in JFrog Cloud.
Artifacts in remote repositories caches are not transferred.
Federated repositories are transferred without their federation members. After the transfer, you'll need to reconfigure the federation as described in the Federated Repositories documentation.
Docker repositories with names that include dots or underscores aren't allowed in JFrog Cloud.
Artifact properties with a value longer than 2.4K characters are not supported in JFrog Cloud. Such properties are generally seen in Conan artifacts. The artifacts will be transferred without the properties in this case. A report with these artifacts will become available to you at the end of the transfer.
The files transfer process allows transferring files that were created or modified on the source instance after the process started. However:
Files that were deleted on the source instance after the process started, are not deleted on the target instance by the process.
The custom properties of those files are also updated on the target instance. However, if only the custom properties of those files were modified on the source, but not the files' content, the properties are not modified on the target instance by the process.
When transferring files in build-info repositories, JFrog CLI limits the total number of working threads to 8. This is done to limit the load on the target instance while transferring build-info.
The transfer process includes two phases, which you must perform in the following order:
Configuration Transfer: Transfers the configuration entities like users, permissions, and repositories from the source instance to the target instance.
File Transfer: Transfers the files (binaries) stored in the source instance repositories to the target instance repositories.
Note
Files that are cached by remote repositories aren't transferred.
The content of Artifactory's Trash Can isn't transferred.
You can do both steps while the source instance is in use. No downtime on the source instance is required while the transfer is in progress.
If your source instance hosts files that are larger than 25 GB, they will be blocked during the transfer. To learn how to check whether large files are hosted by your source instance, and what to do in that case, read this section.
Ensure that you can log in to the UI of both the source and target instances with users that have admin permissions.
Ensure that the target instance license does not support fewer features than the source instance license.
Run the file transfer pre-checks as described here.
Ensure that all the remote repositories on the source Artifactory instance have network access to their destination URL once they are created in the target instance. Even if one remote or federated repository does not have access, the configuration transfer operation will be cancelled. You do have the option of excluding specific repositories from being transferred.
Ensure that all the replications configured on the source Artifactory instance have network access to their destination URL once they are created in the target instance.
Ensure that you have a user who can log in to MyJFrog.
Ensure that you can log in to the primary node of your source instance through a terminal.
Perform the following steps to transfer configuration and artifacts from the source to the target instance. You must run the steps in the exact sequence and do not run any of the commands in parallel.
By default, the target does not have the APIs required for the configuration transfer. Enabling the target instance for configuration transfer is done through MyJFrog. Once the configuration transfer is complete, you must disable the configuration transfer in MyJFrog as described in Step 4 below.
Warning
Enabling configuration transfer will trigger a shutdown of JFrog Xray, Distribution, Insights and Pipelines in the cloud and these services will therefore become unavailable. Once you disable the configuration transfer later on in the process, these services will be started up again.
Enabling configuration transfer will scale down JFrog Artifactory, which will reduce its available resources. Once you disable the configuration transfer later on in the process, Artifactory will be scaled up again.
Follow the below steps to enable the configuration transfer.
Log in to MyJFrog.
Click on Settings.
Under the Transfer Artifactory Configuration from Self-Hosted to Cloud section, click on the acknowledgment checkbox. You cannot enable configuration transfer until you select the checkbox.
If you have an Enterprise+ subscription with more than one Artifactory instance, select the target instance from the drop-down menu.
Toggle Enable Configuration Transfer to enable the transfer. The process may take a few minutes to complete.
The configuration transfer is now enabled, and you can continue with the transfer process.
To set up the source instance, you must install the data-transfer user plugin in the primary node of the source instance. This section guides you through the installation steps.
Install JFrog CLI on the primary node machine of the source instance as described here.
Configure the connection details of the source Artifactory instance with your admin credentials by running the following command from the terminal.
jf c add source-server
Ensure that the JFROG_HOME environment variable is set and holds the value of the JFrog installation directory. It usually points to the /opt/jfrog directory. In case the variable isn't set, set its value to point to the correct directory as described in the JFrog Product Directory Structure article.
If the source instance has internet access, follow this single step:
Download and install the data-transfer user plugin by running the following command from the terminal
jf rt transfer-plugin-install source-server
If the source instance has no internet access, follow these steps instead.
Download the following two files from a machine that has internet access: data-transfer.jar and dataTransfer.groovy.
Create a new directory on the primary node machine of the source instance and place the two files you downloaded inside this directory.
Install the data-transfer user plugin by running the following command from the terminal. Replace the <plugin files dir>
token with the full path to the directory which includes the plugin files you downloaded.
jf rt transfer-plugin-install source-server --dir "<plugin files dir>"
If the above is not an option, you may also load the transfer plugin manually into the on-premise plugins directory to continue with the transfer process.
Step-1: Download the dataTransfer JAR file from here (https://releases.jfrog.io/artifactory/jfrog-releases/data-transfer/[RELEASE]/lib/data-transfer.jar) and add it under $JFROG_HOME/artifactory/var/etc/artifactory/plugins/lib/. If the "lib" directory is not present, create one.
Step-2: Download the dataTransfer.groovy file from here (https://releases.jfrog.io/artifactory/jfrog-releases/data-transfer/[RELEASE]/dataTransfer.groovy) and add it under $JFROG_HOME/artifactory/var/etc/artifactory/plugins/.
Step-3: Reload the plugin using the following command:
curl -u admin -X POST http://localhost:8082/artifactory/api/plugins/reload
If the plugin is loaded successfully, the source instance is all set to proceed with the configuration transfer.
Warning
The following process will wipe out the entire configuration of the target instance, and replace it with the configuration of the source instance. This includes repositories and users.
Install JFrog CLI on the source instance machine as described here.
Configure the connection details of the source Artifactory instance with your admin credentials by running the following command from the terminal.
jf c add source-server
Configure the connection details of the target Artifactory server with your admin credentials by running the following command from the terminal.
jf c add target-server
Run the following command to verify that the target URLs of all the remote repositories are accessible from the target.
jf rt transfer-config source-server target-server --prechecks
If the command output shows that a target URL isn't accessible for any of the repositories, you'll need to make the URL accessible before proceeding to transfer the config. You can then rerun the command to ensure that the URLs are accessible.
If the command execution fails with an error indicating that the configuration import failed against the target server due to some existing data, review the configuration present in the cloud instance to ensure it is safe to override before using the --force flag. If you would like to preserve the existing configuration in the cloud instance while transferring the additional data from the self-hosted instance, refer to the link here (https://docs.jfrog-applications.jfrog.io/jfrog-applications/jfrog-cli/cli-for-jfrog-cloud-transfer#transferring-projects-and-repositories-from-multiple-source-instances). That section describes a merge task instead of a transfer, used to sync the data between the instances.
Note: Users are not transferred when executing a merge. Only repositories and projects are merged with the cloud instance.
Note
The following process will wipe out the entire configuration of the target instance, and replace it with the configuration of the source instance. This includes repositories and users.
Transfer the configuration from the source to the target by running the following command.
jf rt transfer-config source-server target-server
This command might take up to two minutes to run.
Note
By default, the command will not transfer the configuration if it finds that the target instance isn't empty. This can happen, for example, if you ran the transfer-config command before. If you'd like to force the command to run anyway and overwrite the existing configuration on the target, run the command with the --force option.
In case you do not wish to transfer all repositories, you can use the --include-repos and --exclude-repos command options. Run the following command to see the usage of these options.
jf rt transfer-config -h
Troubleshooting
Did you encounter the following error when running the command?
Error: Creating temp export directory: /export/jfrog-cli/tmp/jfrog.cli.temp.-1728658888-1442707797/20241011.110128.tmp
500 : Failed to create backup dir: /export/jfrog-cli/tmp/jfrog.cli.temp.-1728658888-1442707797/20241011.110128.tmp
This error commonly occurs on Red Hat Enterprise Linux (RHEL) and CentOS platforms. The issue arises because the CLI process expects the temporary directory (/tmp) to be owned by the artifactory user, even when the process is run by root. To resolve this issue, follow these steps:
Create a new directory named tmp in your home directory:
mkdir ~/tmp
Assign ownership of the new tmp directory to the artifactory user and group:
sudo chown -R artifactory:artifactory ~/tmp
Inform JFrog CLI to use the new temporary directory by setting the JFROG_CLI_TEMP_DIR environment variable:
export JFROG_CLI_TEMP_DIR=~/tmp
Execute the transfer-config command again.
View the command output in the terminal to verify that there are no errors. The command output is divided into the following four phases:
========== Phase 1/4 - Preparations ==========
========== Phase 2/4 - Export configuration from the source Artifactory ==========
========== Phase 3/4 - Download and modify configuration ==========
========== Phase 4/4 - Import configuration to the target Artifactory ==========
View the log to verify there are no errors.
The target instance should now be accessible with the admin credentials of the source instance. Log in to the target instance UI and verify that it contains the same repositories as the source.
Once the configuration transfer is successful, you need to disable the configuration transfer on the target instance. This is important both for security reasons and because the target server is set to be low on resources while configuration transfer is enabled.
Log in to MyJFrog.
Under the Actions menu, choose Enable Configuration Transfer.
Toggle Enable Configuration Transfer to off to disable configuration transfer.
Disabling the configuration transfer might take some time.
Before initiating the file transfer process, we highly recommend running pre-checks, to identify issues that can affect the transfer. You trigger the pre-checks by running a JFrog CLI command on your terminal. The pre-checks will verify the following:
There's network connectivity between the source and target instances.
The source instance does not include artifacts with properties with values longer than 2.4K characters. This is important, because values longer than 2.4K characters are not supported in JFrog Cloud, and those properties are skipped during the transfer process.
To run the pre-checks, follow these steps:
Install JFrog CLI on any machine that has access to both the source and the target JFrog instances. To do this, follow the steps described here.
Run the following command:
jf rt transfer-files source-server target-server --prechecks
Initiating File Transfer: Run the following command to start pushing the files from all the repositories in the source instance to the target instance.
```sh
jf rt transfer-files source-server target-server
```
This command may take a few days to push all the files, depending on your system size and your network speed. While the command is running, it displays the transfer progress visually inside the terminal.
If you're running the command in the background, you can use the following command to view the transfer progress.
jf rt transfer-files --status
Note
In case you do not wish to transfer the files from all repositories, or wish to run the transfer in phases, you can use the
--include-repos
and --exclude-repos
command options. Run the following command to see the usage of these options.
jf rt transfer-files -h
If the traffic between the source and target instance needs to be routed through an HTTPS proxy, refer to this section.
You can stop the transfer process by pressing CTRL+C if the process is running in the foreground, or by running the following command if you're running the process in the background.
jf rt transfer-files --stop
The process will continue from the point it stopped when you re-run the command.
While the file transfer is running, monitor the load on your source instance and, if needed, reduce the transfer speed or increase it for better performance. For more information, see Controlling the File Transfer Speed below.
A path to an errors summary file will be printed at the end of the run, referring to a generated CSV file. Each line on the summary CSV represents an error log of a file that failed to be transferred. On subsequent executions of the jf rt transfer-files
command, JFrog CLI will attempt to transfer these files again.
Once the jf rt transfer-files command finishes transferring the files, you can run it again to transfer files that were created or modified during the transfer. You can run the command as many times as needed. Subsequent executions of the command will also attempt to transfer files that failed to be transferred during previous executions of the command.
Note
Read more about how the transfer files works in this section.
You have the option to sync the configuration between the source and target after the file transfer process is complete. You may want to do this if new config entities, such as projects, repositories, or users, were created or modified on the source while the files transfer process was running. To do this, simply repeat steps 1-3 above.
Transferring files larger than 25 GB: By default, files that are larger than 25 GB will be blocked by the JFrog Cloud infrastructure during the file transfer. To check whether your source Artifactory instance hosts files larger than that size, run the following curl command from your terminal, after replacing the <source instance URL>, <username> and <password> tokens with your source instance details. The command execution may take a few minutes, depending on the number of files hosted by Artifactory.
curl -X POST <source instance URL>/artifactory/api/search/aql -H "Content-Type: text/plain" -d 'items.find({"name" : {"$match":"*"}}).include("size","name","repo").sort({"$desc" : ["size"]}).limit(1)' -u "<USERNAME>:<PASSWORD>"
You should get a result that looks like the following.
{
"results":[
{
"size":132359021
}
],
"range":{
"start_pos":0,
"end_pos":1,
"total":1,
"limit":1
}
}
The value of size represents the largest file size hosted by your source Artifactory instance.
If the size value you received is larger than 25000000000, please avoid initiating the files transfer before contacting JFrog Support, to check whether this size limit can be increased for you. You can contact Support by sending an email to [email protected]
Routing the traffic from the source to the target through an HTTPS proxy: The jf rt transfer-files command pushes the files directly from the source to the target instance over the network. In case the traffic from the source instance needs to be routed through an HTTPS proxy, follow these steps.
a. Define the proxy details in the source instance UI as described in the Managing Proxies documentation.
b. When running the jf rt transfer-files command, add the --proxy-key option to the command, with the Proxy Key you configured in the UI as the option value. For example, if the Proxy Key you configured is my-proxy-key, run the command as follows:
jf rt transfer-files my-source my-target --proxy-key my-proxy-key
The jf rt transfer-config command transfers all the config entities (users, groups, projects, repositories, and more) from the source to the target instance. While doing so, the existing configuration on the target is deleted and replaced with the new configuration from the source. If you'd like to transfer the projects and repositories from multiple source instances to a single target instance, while preserving the existing configuration on the target, follow the below steps.
Note
These steps trigger the transfer of the projects and repositories only. Other configuration entities like users are currently not supported.
Ensure that you have admin access tokens for both the source and target instances. You'll have to use an admin access token and not an Admin username and password.
Install JFrog CLI on any machine that has access to both the source and the target instances using the steps described here. Make sure to use the admin access tokens and not an admin username and password when configuring the connection details of the source and the target.
Run the following command to merge all the projects and repositories from the source to the target instance.
jf rt transfer-config-merge source-server target-server
Note
In case you do not wish to transfer the files from all projects or the repositories, or wish to run the transfer in phases, you can use the
--include-projects, --exclude-projects, --include-repos
and --exclude-repos
command options. Run the following command to see the usage of these options.
jf rt transfer-config-merge -h
The jf rt transfer-files
command pushes the files from the source instance to the target instance as follows:
The files are pushed for each repository, one by one in sequence.
For each repository, the process includes the following three phases:
Phase 1 pushes all the files in the repository to the target.
Phase 2 pushes files that have been created or modified after phase 1 started running (diffs).
Phase 3 attempts to push files that failed to be transferred in earlier phases (Phase 1 or Phase 2) or in previous executions of the command.
If Phase 1 finished running for a specific repository, and you run the jf rt transfer-files command again, only Phase 2 and Phase 3 will be triggered. You can run the jf rt transfer-files command as many times as needed, till you are ready to move your traffic to the target instance permanently. In any subsequent run of the command, Phase 2 will transfer the newly created and modified files, and Phase 3 will retry transferring files that failed to be transferred in previous phases and also in previous runs of the command.
To achieve this, JFrog CLI stores the current state of the file transfer process in a directory named transfer under the JFrog CLI home directory. You can usually find this directory at ~/.jfrog/transfer.
JFrog CLI uses the state stored in this directory to avoid repeating transfer actions performed in previous executions of the command. For example, once Phase 1 is completed for a specific repository, subsequent executions of the command will skip Phase 1 and run Phase 2 and Phase 3 only.
In case you'd like to ignore the stored state, and restart the file transfer from scratch, you can add the --ignore-state
option to the jf rt transfer-files
command.
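For example, using the server IDs configured earlier:
jf rt transfer-files source-server target-server --ignore-state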
Unlike the transfer-config command, which should be run from the primary node machine of Artifactory, it is recommended to run the transfer-files command from a machine that has network access to the source Artifactory URL. This allows spreading the transfer load across all the Artifactory cluster nodes. This machine should also have network access to the target Artifactory URL.
Follow these steps to install JFrog CLI on that machine.
Install JFrog CLI by using one of the JFrog CLI Installers. For example:
curl -fL https://install-cli.jfrog.io | sh
If your source instance is accessible only through an HTTP/HTTPS proxy, set the proxy environment variable as described here.
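For example, assuming your proxy listens at proxy.example.com:8080 (an example address) and that the standard HTTPS_PROXY environment variable is used:
export HTTPS_PROXY=http://proxy.example.com:8080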
Configure the connection details of the source Artifactory instance with your admin credentials. Run the following command and follow the instructions.
jf c add source-server
Configure the connection details of the target Artifactory instance.
jf c add target-server
Install JFrog CLI on your source instance by using one of the JFrog CLI Installers. For example:
curl -fL https://install-cli.jfrog.io | sh
Note
If the source instance is running as a docker container, and you're not able to install JFrog CLI while inside the container, follow these steps.
Connect to the host machine through the terminal.
Download the JFrog CLI executable into the correct directory by running this command.
curl -fL https://getcli.jfrog.io/v2-jf | sh
Copy the JFrog CLI executable you've just downloaded to the container, by running the following docker command. Make sure to replace <the container name> with the name of the container.
docker cp jf <the container name>:/usr/bin/jf
Connect to the container and run the following command to ensure JFrog CLI is installed
jf -v
The jf rt transfer-files
command pushes the binaries from the source instance to the target instance. This transfer can take days, depending on the size of the total data transferred, the network bandwidth between the source and the target instance, and additional factors.
Since the process is expected to run while the source instance is still being used, monitor the instance to ensure that the transfer does not add too much load to it. Also, you might decide to increase the load for faster transfer while you monitor the transfer. This section describes how to control the file transfer speed.
By default, the jf rt transfer-files
command uses 8 working threads to push files from the source instance to the target instance. Reducing this value will cause slower transfer speed and lower load on the source instance, and increasing it will do the opposite. We therefore recommend increasing it gradually. This value can be changed while the jf rt transfer-files
command is running. There's no need to stop the process to change the number of working threads. The new value set will be cached by JFrog CLI and also used for subsequent runs from the same machine. To set the value, simply run the following interactive command from a new terminal window on the same machine that runs the jf rt transfer-files
command.
jf rt transfer-settings
When your self-hosted Artifactory hosts hundreds of terabytes of binaries, you may consult with your JFrog account manager about the option of reducing the file transfer time by manually copying the entire filestore to the JFrog Cloud storage. This reduces the transfer time because the binaries' content does not need to be transferred over the network.
The jf rt transfer-files
command transfers the metadata of the binaries to the database (file paths, file names, properties, and statistics). The command also transfers the binaries that have been created and modified after you copy the filestore.
To run the file transfer after you copy the filestore, add the --filestore
command option to the jf rt transfer-files
command.
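For example, using the server IDs configured earlier:
jf rt transfer-files source-server target-server --filestore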
To help reduce the time it takes for Phase 2 to run, you may configure Event-Based Push Replication for some or all of the local repositories on the source instance. With replication configured, when files are created or updated on the source repository, they are immediately replicated to the corresponding repository on the target instance. For more information, see the Repository Replication documentation.
The replication can be configured at any time. Before, during, or after the file transfer process.
Why is the total file count on my source and target instances different after the files transfer finishes?
It is expected to sometimes see significant differences between the file counts on the source and target instances after the transfer ends. These differences can be caused by many reasons, and in most cases are not an indication of an issue. For example, Artifactory may include file cleanup policies that are triggered by the file deployment. This can cause some files to be cleaned up from the target repository after they are transferred.
How can I validate that all files were transferred from the source to the target instance?
There's actually no need to validate that all files were transferred at the end of the transfer process. JFrog CLI performs this validation for you while the process is running. It does that as follows.
JFrog CLI traverses the repositories on the source instance and pushes all files to the target instance.
If a file fails to reach the target instance or isn't deployed there successfully, the source instance logs this error with the file details.
At the end of the transfer process, JFrog CLI provides you with a summary of all files that failed to be pushed.
The failures are also logged inside the transfer
directory under the JFrog CLI home directory. This directory is usually located at ~/.jfrog/transfer
. Subsequent runs of the jf rt transfer-files
command use this information for attempting another transfer of the files.
Does JFrog CLI validate the integrity of files, after they are transferred to the target instance?
Yes. The source Artifactory instance stores a checksum for every file it hosts. When files are transferred to the target instance, they are transferred with the checksums as HTTP headers. The target instance calculates the checksum for each file it receives and then compares it to the received checksum. If the checksums don't match, the target reports this to the source, which will attempt to transfer the file again at a later stage of the process.
Can I stop the jf rt transfer-files command and then start it again? Would that cause any issues?
You can stop the command at any time by hitting CTRL+C and then run it again. JFrog CLI stores the state of the transfer process in the "transfer" directory under the JFrog CLI home directory. This directory is usually located at ~/.jfrog/transfer
. Subsequent executions of the command use the data stored in that directory to try and avoid transferring files that have already been transferred in previous command executions.
JFrog CLI integrates with any development ecosystem, allowing you to collect build-info and then publish it to Artifactory. By publishing build-info to Artifactory, JFrog CLI empowers Artifactory to provide visibility into the artifacts deployed, the dependencies used, and extensive information on the build environment, to allow fully traceable builds. Read more about build-info and build integration with Artifactory.
Many of JFrog CLI's commands accept two optional command options: --build-name and --build-number. When these options are added, JFrog CLI collects and records the build info locally for these commands. When running multiple commands using the same build name and build number, JFrog CLI aggregates the collected build info into one build. The recorded build-info can be later published to Artifactory using the build-publish command.
Build-info is collected by adding the --build-name
and --build-number
options to different CLI commands. The CLI commands can be run several times and cumulatively collect build-info for the specified build name and number until it is published to Artifactory. For example, running the jf rt download
command several times with the same build name and number will accumulate each downloaded file in the corresponding build-info.
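For example, the following two runs (with example repository paths) accumulate all of their downloaded files into the same build-info for build my-build-name with build number 18:
jf rt dl "my-local-repo/libs/" --build-name=my-build-name --build-number=18
jf rt dl "my-local-repo/configs/" --build-name=my-build-name --build-number=18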
Dependencies are collected by adding the --build-name
and --build-number
options to the jf rt download
command.
For example, the following command downloads the cool-froggy.zip
file found in repository my-local-repo
, but it also specifies this file as a dependency in build my-build-name
with build number 18:
Build artifacts are collected by adding the --build-name
and --build-number
options to the jf rt upload
command.
For example, the following command specifies that file froggy.tgz
uploaded to repository my-local-repo
is a build artifact of build my-build-name
with build number 18:
This command is used to collect environment variables and attach them to a build.
Environment variables are collected using the build-collect-env
(bce
) command.
jf rt bce <build name> <build number>
The following table lists the command arguments and flags:
Example 1
The following command collects all currently known environment variables, and attaches them to the build-info for build my-build-name
with build number 18:
Example 2
Collect environment variables for build name: frogger-build and build number: 17
The build-add-git
(bag) command collects the Git revision and URL from the local .git directory and adds it to the build-info. It can also collect the list of tracked project issues (for example, issues stored in JIRA or other bug tracking systems) and add them to the build-info. The issues are collected by reading the git commit messages from the local git log. Each commit message is matched against a pre-configured regular expression, which retrieves the issue ID and issue summary. The information required for collecting the issues is retrieved from a yaml configuration file provided to the command.
jf rt bag [command options] <build name> <build number> [Path To .git]
The following table lists the command arguments and flags:
This is the configuration file structure.
The download command, as well as other commands which download dependencies from Artifactory, accept the --build-name and --build-number command options. Adding these options records the downloaded files as build dependencies. In some cases, however, it is necessary to add a file, which has been downloaded by another tool, to a build. Use the build-add-dependencies command to do this.
By default, the command collects the files from the local file system. If you'd like the files to be collected from Artifactory however, add the --from-rt option to the command.
jf rt bad [command options] <build name> <build number> <pattern>
jf rt bad --spec=<File Spec path> [command options] <build name> <build number>
Example 1
Add all files located under the path/to/build/dependencies/dir directory as dependencies of a build. The build name is my-build-name and the build number is 7. The build-info is only updated locally. To publish the build-info to Artifactory use the jf rt build-publish command.
Example 2
Add all files located in the my-local-repo Artifactory repository, under the dependencies folder, as dependencies of a build. The build name is my-build-name and the build number is 7. The build-info is only updated locally. To publish the build-info to Artifactory use the jf rt build-publish command.
Example 3
Add all files located under the path/to/build/dependencies/dir directory as dependencies of a build. The build name is my-build-name, the build number is 7 and module is m1. The build-info is only updated locally. To publish the build-info to Artifactory use the jf rt build-publish command.
This command is used to publish build info to Artifactory. To publish the accumulated build-info for a build to Artifactory, use the build-publish command. For example, the following command publishes all the build-info collected for build my-build-name with build number 18:
jf rt bp [command options] <build name> <build number>
Publishes to Artifactory all the build-info collected for build my-build-name with build number 18
The build-info, which is collected and published to Artifactory by the jf rt build-publish command, can include multiple modules. Each module in the build-info represents a package, which is the result of a single build step, or in other words, a JFrog CLI command execution. For example, the following command adds a module named m1 to a build named my-build with 1 as the build number:
The following command adds a second module, named m2, to the same build:
You now publish the generated build-info to Artifactory using the following command:
Now that you have your build-info published to Artifactory, you can perform actions on the entire build. For example, you can download, copy, move or delete all or some of the artifacts of a build. Here's how you do this.
In some cases though, your build is composed of multiple build steps, which are running on multiple different machines or spread across different time periods. How do you aggregate those build steps, or in other words, aggregate those command executions, into one build-info?
The way to do this, is to create a separate build-info for every section of the build, and publish it independently to Artifactory. Once all the build-info instances are published, you can create a new build-info, which references all the previously published build-info instances. The new build-info can be viewed as a "master" build-info, which references other build-info instances.
So the next question is - how can this reference between the two build-info instances be created?
The way to do this is by using the build-append command. Running this command on an unpublished build-info adds a reference to a different build-info, which has already been published to Artifactory. This reference is represented by a new module in the new build-info. The ID of this module will have the following format: <referenced build name>/<referenced build number>.
Now, when downloading the artifacts of the "master" build, you'll actually be downloading the artifacts of all of its referenced builds. The examples below demonstrate this.
jf rt ba <build name> <build number> <build name to append> <build number to append>
Requirements
Artifactory version 7.25.4 and above.
This script illustrates the process of creating two build-info instances, publishing both to Artifactory, and subsequently generating a third build-info that consolidates the published instances before publishing it to Artifactory.
This command is used to promote a build in Artifactory, moving or copying its artifacts to a target repository.
jf rt bpr [command options] <build name> <build number> <target repository>
This example involves moving the artifacts associated with the published build-info, identified by the build name 'my-build-name' and build number '18', from their existing Artifactory repository to a new Artifactory repository called 'target-repository'.
Build-info is accumulated by the CLI according to the commands you apply until you publish the build-info to Artifactory. If, for any reason, you wish to "reset" the build-info and cleanup (i.e. delete) any information accumulated so far, you can use the build-clean
(bc
) command.
jf rt bc <build name> <build number>
The following table lists the command arguments and flags:
The following command cleans up any build-info collected for build my-build-name
with build number 18:
This command is used to discard builds previously published to Artifactory using the build-publish command.
jf rt bdi [command options] <build name>
The following table lists the command arguments and flags:
Discard the oldest build numbers of build my-build-name from Artifactory, leaving only the 10 most recent builds.
Discard the oldest build numbers of build my-build-name from Artifactory, leaving only builds published during the last 7 days.
Discard the oldest build numbers of build my-build-name from Artifactory, leaving only builds published during the last 7 days. b20 and b21 will not be discarded.
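The following is a sketch of the corresponding commands, assuming the build-discard options --max-builds, --max-days and --exclude-builds (run jf rt bdi -h to confirm the exact option names and value formats):
jf rt bdi my-build-name --max-builds=10
jf rt bdi my-build-name --max-days=7
jf rt bdi my-build-name --max-days=7 --exclude-builds="b20,b21"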
jf rt dl my-local-repo/cool-froggy.zip --build-name=my-build-name --build-number=18
jf rt u froggy.tgz my-local-repo --build-name=my-build-name --build-number=18
Command name
rt build-collect-env
Abbreviation
rt bce
Command arguments:
The command accepts two arguments.
Build name
Build name.
Build number
Build number.
Command options:
--project
[Optional] JFrog project key.
jf rt bce my-build-name 18
jf rt bce frogger-build 17
Command name
rt build-add-git
Abbreviation
rt bag
Command arguments:
The command accepts three arguments.
Build name
Build name.
Build number
Build number.
.git path
Optional - Path to a directory containing the .git directory. If not specified, the .git directory is assumed to be in the current directory or in one of the parent directories.
Command options:
--config
[Optional] Path to a yaml configuration file, used for collecting tracked project issues and adding them to the build-info.
--server-id
[Optional]
Server ID configured using the 'jf config' command. This is the server to which the build-info will be later published, using the jf rt build-publish
command. This option, if provided, overrides the serverID value in this command's yaml configuration. If both values are not provided, the default server, configured by the 'jf config' command, is used.
--project
[Optional] JFrog project key.
Property name
Description
Version
The schema version is intended for internal use. Do not change!
serverID
Artifactory server ID configured by the 'jf config' command. The command uses this server for fetching details about previous published builds. The --server-id command option, if provided, overrides the serverID value. If both the serverID property and the --server-id command options are not provided, the default server, configured by the 'jf config' command is used.
trackerName
The name (type) of the issue tracking system. For example, JIRA. This property can take any value.
regexp
A regular expression used for matching the git commit messages. The expression should include two capturing groups - for the issue key (ID) and the issue summary. In the example above, the regular expression matches the commit messages as displayed in the following example: HAP-1007 - This is a sample issue
keyGroupIndex
The capturing group index in the regular expression used for retrieving the issue key. In the example above, setting the index to "1" retrieves HAP-1007 from this commit message: HAP-1007 - This is a sample issue
summaryGroupIndex
The capturing group index in the regular expression for retrieving the issue summary. In the example above, setting the index to "2" retrieves the sample issue from this commit message: HAP-1007 - This is a sample issue
trackerUrl
The issue tracking URL. This value is used for constructing a direct link to the issues in the Artifactory build UI.
aggregate
Set to true, if you wish all builds to include issues from previous builds.
aggregationStatus
If aggregate is set to true, this property indicates how far in time should the issues be aggregated. In the above example, issues will be aggregated from previous builds, until a build with a RELEASE status is found. Build statuses are set when a build is promoted using the jf rt build-promote command.
jf rt bag frogger-build 17 checkout-dir
version: 1
issues:
# The serverID yaml property is optional. The --server-id command option, if provided, overrides the serverID value.
# If both the serverID property and the --server-id command options are not provided,
# the default server, configured by the "jfrog config add" command is used.
serverID: my-artifactory-server
trackerName: JIRA
regexp: (.+-[0-9]+)\s-\s(.+)
keyGroupIndex: 1
summaryGroupIndex: 2
trackerUrl: https://my-jira.com/issues
aggregate: true
aggregationStatus: RELEASED
Command name
rt build-add-dependencies
Abbreviation
rt bad
Command arguments:
The command takes three arguments.
Build name
The build name to add the dependencies to
Build number
The build number to add the dependencies to
Pattern
Specifies the local file system path to dependencies which should be added to the build info. You can specify multiple dependencies by using wildcards or a regular expression, as designated by the --regexp command option. If you have specified that you are using regular expressions, then the first one used in the argument must be enclosed in parentheses.
Command options:
When using the * or ; characters in the command options or arguments, make sure to wrap the whole options or arguments string in quotes (") to make sure the * or ; characters are not interpreted as literals.
--from-rt
[Default: false] Set to true to search the files in Artifactory, rather than on the local file system. The --regexp option is not supported when --from-rt is set to true.
--server-id
[Optional] Server ID configured using the 'jf config' command.
--spec
[Optional] Path to a File Spec.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--recursive
[Default: true] When false, artifacts inside sub-folders in Artifactory will not be affected.
--regexp
[Default: false] Set to true to use a regular expression instead of a wildcard expression to collect files to be added to the build info. This option is not supported when --from-rt is set to true.
--dry-run
[Default: false] Set to true to only get a summary of the dependencies that will be added to the build info.
--module
[Optional] Module name in the build-info to which the dependency is added.
--exclusions
A list of semicolon-separated(;) exclude patterns. Allows using wildcards or a regular expression according to the value of the regexp
option.
jf rt bad my-build-name 7 "path/to/build/dependencies/dir/"
jf rt bad my-build-name 7 "my-local-repo/dependencies/" --from-rt
jf rt bad my-build-name 7 "path/to/build/dependencies/dir/" --module m1
Command name
rt build-publish
Abbreviation
rt bp
Command arguments:
The command accepts two arguments.
Build name
Build name to be published.
Build number
Build number to be published.
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--project
[Optional] JFrog project key.
--build-url
[Optional] Can be used for setting the CI server build URL in the build-info.
--env-include
[Default: *] List of semicolon-separated(;) patterns in the form of "value1;value2;..." Only environment variables that match those patterns will be included in the build info.
--env-exclude
[Default: password;secret;key] List of semicolon-separated(;) case insensitive patterns in the form of "value1;value2;...". Environment variables that match those patterns will be excluded.
--dry-run
[Default: false] Set to true to disable communication with Artifactory.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--overwrite
[Default: false] Overwrites all existing occurrences of build infos with the provided name and number. Build artifacts will not be deleted.
jf rt bp my-build-name 18
jf rt upload "a/*.zip" generic-local --build-name my-build --build-number 1 --module m1
jf rt upload "b/*.zip" generic-local --build-name my-build --build-number 1 --module m2
jf rt build-publish my-build 1
jf rt download "*" --build my-build/1
Command name
rt build-append
Abbreviation
rt ba
Command arguments:
The command accepts four arguments.
Build name
The current (not yet published) build name.
Build number
The current (not yet published) build number.
Build name to append
The published build name to append to the current build.
Build number to append
The published build number to append to the current build.
Command options:
This command has no options.
# Create and publish build a/1
jf rt upload "a/*.zip" generic-local --build-name a --build-number 1
jf rt build-publish a 1
# Create and publish build b/1
jf rt upload "b/*.zip" generic-local --build-name b --build-number 1
jf rt build-publish b 1
# Append builds a/1 and b/1 to build aggregating-build/10
jf rt build-append aggregating-build 10 a 1
jf rt build-append aggregating-build 10 b 1
# Publish build aggregating-build/10
jf rt build-publish aggregating-build 10
# Download the artifacts of aggregating-build/10, which is the same as downloading the artifacts of a/1 and b/1
jf rt download --build aggregating-build/10
Command name
rt build-promote
Abbreviation
rt bpr
Command arguments:
The command accepts three arguments.
Build name
Build name to be promoted.
Build number
Build number to be promoted.
Target repository
Build promotion target repository.
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--project
[Optional] JFrog project key.
--status
[Optional] Build promotion status.
--comment
[Optional] Build promotion comment.
--source-repo
[Optional] Build promotion source repository.
--include-dependencies
[Default: false] If set to true, the build dependencies are also promoted.
--copy
[Default: false] If set to true, the build artifacts and dependencies are copied to the target repository; otherwise, they are moved.
--props
[Optional] List of semicolon-separated(;) properties in the form of "key1=value1;key2=value2;..." to attach to the build artifacts.
--dry-run
[Default: false] If true, promotion is only simulated. The build is not promoted.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
jf rt bpr my-build-name 18 target-repository
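For illustration, the promotion status and comment options described above might be used along these lines (the status value and comment text are placeholders):
jf rt bpr my-build-name 18 target-repository --status=RELEASED --comment="Passed QA"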
Command name
rt build-clean
Abbreviation
rt bc
Command arguments:
The command accepts two arguments.
Build name
Build name.
Build number
Build number.
Command options:
The command has no options.
jf rt bc my-build-name 18
Command name
rt build-discard
Abbreviation
rt bdi
Command arguments:
The command accepts one argument.
Build name
Build name.
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--max-days
[Optional] The maximum number of days to keep builds in Artifactory.
--max-builds
[Optional] The maximum number of builds to store in Artifactory.
--exclude-builds
[Optional] List of comma-separated(,) build numbers in the form of "build1,build2,...", that should not be removed from Artifactory.
--delete-artifacts
[Default: false] If set to true, automatically removes build artifacts stored in Artifactory.
--async
[Default: false] If set to true, build discard will run asynchronously and will not wait for response.
jf rt bdi my-build-name --max-builds=10
jf rt bdi my-build-name --max-days=7
jf rt bdi my-build-name --max-days 7 --exclude-builds "b20,b21"
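As a further illustrative sketch, the --delete-artifacts option could be combined with a retention rule to also remove the artifacts of discarded builds (values are placeholders):
jf rt bdi my-build-name --max-days=30 --delete-artifacts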
This page describes how to use JFrog CLI with Release Lifecycle Management.
Note
Release Lifecycle Management is available since Artifactory 7.63.2.
When used with JFrog Release Lifecycle Management, JFrog CLI uses the following syntax:
$ jf command-name global-options command-options arguments
The create command allows creating a Release Bundle v2 using file specs. The file spec may be based on one or more of the following sources:
Published build info:
{
"files": [
{
"build": "<build-name>/<build-number>",
"includeDeps": "[true/false]",
"project": "<project-key>"
},
...
]
}
<build-number> is optional; the latest build will be used if empty.
includeDeps is optional; false by default.
project is optional; the default project will be used if empty.
Existing Release Bundles:
{
"files": [
{
"bundle": "<bundle-name>/<bundle-version>",
"project": "<project-key>"
},
...
]
}
project is optional; the default project will be used if empty.
A pattern of artifacts in Artifactory:
{
"files": [
{
"pattern": "repo/path/*",
"exclusions": ["excluded",...],
"props": "key1=value1;key2=value2;key3=value3",
"excludeArtifacts": "key1=value1;key2=value2;key3=value3",
"recursive": "[true/false]"
},
...
]
}
Only pattern is mandatory. recursive is true by default. The path can include local, Federated, and remote repositories.
AQL query:
{
"files": [
{
"aql": {
"items.find": {
"repo": "<repo>",
"path": "<path>",
"name": "<file>"
}
}
}
]
}
Only a single AQL query may be provided.
Specified package: The Release Bundle can contain packages of multiple types (for example, Docker, Maven, PyPI, and so on).
{
"files": [
{
"package": "catalina",
"version":"1.0.0",
"type": "maven",
"repoKey": "catalina-dev-maven-local"
}
]
}
A Release Bundle created from multiple sources: The Release Bundle can include any number of builds, artifacts (using a pattern), packages, and Release Bundles. However, it can include only one AQL query.
{
"files": [
{
"build": "Commons-Build/1.0.0",
"includeDeps":"true",
"project": "default"
},
{
"bundle": "rb1/1.0.0",
"project": "default"
},
{
"bundle": "rb4/1.0.0",
"project": "default"
},
{
"pattern": "catalina-dev-maven-local/*.jar"
},
{
"package": "commons",
"version":"1.0.1",
"type": "maven",
"repoKey": "commons-dev-maven-local"
},
{
"package": "catalina",
"version":"1.0.0",
"type": "maven",
"repoKey": "catalina-dev-maven-local"
},
{
"aql": {
"items.find": {
"repo":{"$eq":"catalina-dev-maven-local"},
"$and":[
{"name": {"$match":"*1.0.0.pom"}}
]
}
}
}
]
}
Command-name
release-bundle-create
Abbreviation
rbc
Command arguments:
release bundle name
Name of the newly created Release Bundle.
release bundle version
Version of the newly created Release Bundle.
Command options:
--project
[Optional] Project key associated with the created Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--signing-key
[Optional] The GPG/RSA key-pair name defined in Artifactory. The signing key can also be configured as an environment variable. If no key is specified, Artifactory uses a default key.
--spec
[Optional] Path to a File Spec. If you do not define the spec, you must include the build-name and build-number as environment variables, flags, or a combination of both (flags override environment variables).
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." (wrapped by quotes) to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--build-name
[Optional] The name of the build from which to create the Release Bundle.
--build-number
[Optional] The number of the build from which to create the Release Bundle.
--sync
[Default: true] Set to false to run asynchronously.
--source-type-release-bundles
[Optional] One or more Release Bundles to include in the new Release Bundle in the form of "name=[rb-name], version=[rb-version];..." (wrapped by quotes). Use a semicolon (;) to separate multiple entries. Note: The --spec flag cannot be used in conjunction with the --source-type-release-bundles flag.
--source-type-builds
[Optional] One or more builds to include in the new Release Bundle in the form of "name=[build-name], id=[build-id], include-deps=[true/false];..." (wrapped by quotes). Use a semicolon (;) to separate multiple entries. Note: The --spec flag cannot be used in conjunction with the --source-type-builds flag.
Example 1
Create a Release Bundle using file spec variables.
jf rbc --spec=/path/to/spec.json --spec-vars="key1=value1" --signing-key=myKeyPair myApp 1.0.0
Example 2
Create a Release Bundle synchronously, in project "project0".
jf rbc --spec=/path/to/spec.json --signing-key=myKeyPair --sync=true --project=project0 myApp 1.0.0
Example 3
Create a Release Bundle from a single build using the build name and build number variables.
jf rbc --build-name=Common-builds --build-number=1.0.0 myApp 1.0.0
Example 4
Create a Release Bundle from multiple builds.
jf rbc rb3 1.0.0 --source-type-builds "name=Commons-Build, id=1.0.0, include-deps=true; name=Commons-Build, id=1.0.1"
Example 5
Create a Release Bundle from multiple existing Release Bundles.
jf rbc rb3 1.0.0 --project catalina --source-type-release-bundles "name=rb1, version=1.0.0; name=rb2, version=1.0.0"
Example 6
Create a Release Bundle from existing Release Bundles and builds.
jf rbc rb3 1.0.0 --source-type-builds "name=Commons-Build, id=1.0.0, include-deps=true; name=Commons-Build, id=1.0.1" --source-type-release-bundles "name=rb1, version=1.0.0; name=rb2, version=1.0.0"
This command allows promoting a Release Bundle to a target environment.
Command-name
release-bundle-promote
Abbreviation
rbp
Command arguments:
release bundle name
Name of the Release Bundle to promote.
release bundle version
Version of the Release Bundle to promote.
environment
Name of the target environment for the promotion.
Command options:
--include-repos
[Optional] A list of semicolon-separated(;) repositories to include in the promotion. If this property is left undefined, all repositories (except those specifically excluded) are included in the promotion. If one or more repositories are specifically included, all other repositories are excluded.
--exclude-repos
[Optional] A list of semicolon-separated(;) repositories to exclude from the promotion.
--project
[Optional] Project key associated with the Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--signing-key
[Mandatory] The GPG/RSA key-pair name given in Artifactory.
--sync
[Default: true] Set to false to run asynchronously.
--promotion-type
[Default: copy] Specifies the promotion type (valid values: move / copy).
Promote a Release Bundle named "myApp" version "1.0.0" to environment "PROD". Use signing key pair "myKeyPair".
jf rbp --signing-key=myKeyPair myApp 1.0.0 PROD
Promote a Release Bundle synchronously to environment "PROD". The Release Bundle is named "myApp", version "1.0.0", of project "project0". Use signing key pair "myKeyPair".
jf rbp --signing-key=myKeyPair --project=project0 --sync=true myApp 1.0.0 PROD
Promote a Release Bundle while including certain repositories.
jf rbp --signing-key=myKeyPair --include-repos="generic-local;my-repo" myApp 1.0.0 PROD
Promote a Release Bundle while excluding certain repositories.
jf rbp --signing-key=myKeyPair --exclude-repos="generic-local;my-repo" myApp 1.0.0 PROD
Promote a Release Bundle, using promotion type flag.
jf rbp --signing-key=myKeyPair --promotion-type="move" myApp 1.0.0 PROD
This command enables you to add a single tag to a Release Bundle v2 version and/or define one or more properties. The tag will appear in the Release Lifecycle kanban board. For example, if you tag all your release candidates as release-candidate, you can filter the kanban board to display only those Release Bundle versions. Properties are user-customizable fields that can contain any string and have any value.
Command-name
release-bundle-annotate
Abbreviation
rba
Command arguments:
release bundle name
Name of the Release Bundle to annotate.
release bundle version
Version of the Release Bundle to annotate.
Command options:
--tag
[Optional] The tag is a single free-text value limited to 128 characters, beginning and ending with an alphanumeric character ([a-z0-9A-Z]), with dashes (-), underscores (_), dots (.), and alphanumerics between.
--properties
[Optional] Key-value pairs separated by a semicolon (;). Keys are limited to 255 characters. Values are limited to 2400 characters.
--del-prop
[Optional] Removes a key and all its associated values. See below.
Add or modify a tag or property.
jf rba mybundle 1.0.0 --tag=release --properties "environment=production;buildNumber=1234"
Whenever you use the --tag command option, the value you define replaces the current value.
jf rba mybundle 1.0.0 --tag=rejected
In the example above, the tag that was defined previously (release) is replaced with the new tag rejected.
Whenever you use the --properties command option with an existing key, the values that you define replace the current values.
jf rba mybundle 1.0.0 --properties "environment=DEV,PROD,QA"
In the example above, the value for environment that was defined previously (production) is replaced by the values DEV, PROD, and QA.
To remove the tag, set it to null or leave it empty.
jf rba mybundle 1.0.0 --tag=""
To remove the values from an existing key without removing the key, leave the value empty.
jf rba mybundle 1.0.0 --properties "build=''"
In the example above, all values defined for the build key are removed but the key is retained.
To remove a key and its associated values, use the --del-prop command option.
jf rba mybundle 1.0.0 --del-prop "environment"
In the example above, the environment key and all its associated values are removed.
This command distributes a Release Bundle to an Edge node.
Command-name
release-bundle-distribute
Abbreviation
rbd
Command arguments:
release bundle name
Name of the Release Bundle to distribute.
release bundle version
Version of the Release Bundle to distribute.
Command options:
--city
[Default: *] Wildcard filter for site city name.
--country-codes
[Default: *] semicolon-separated(;) list of wildcard filters for site country codes.
--create-repo
[Default: false] Set to true to create the repository on the edge if it does not exist.
--dist-rules
[Optional] Path to a file, which includes the Distribution Rules in a JSON format. See the "Distribution Rules Structure" below.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--mapping-pattern
[Optional] Specify along with 'mapping-target' to distribute artifacts to a different path on the Edge node. You can use wildcards to specify multiple artifacts.
--mapping-target
[Optional] The target path for distributed artifacts on the edge node. If not specified, the artifacts will have the same path and name on the edge node as on the source Artifactory server. For flexibility in specifying the distribution path, you can include placeholders in the form of {1}, {2}, which are replaced by corresponding tokens in the pattern path that are enclosed in parentheses.
--max-wait-minutes
[Default: 60] Max minutes to wait for sync distribution.
--project
[Optional] Project key associated with the Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--site
[Default: *] Wildcard filter for site name.
--sync
[Default: true] Set to false to run asynchronously.
Distribution Rules Structure
{
"distribution_rules": [
{
"site_name": "DC-1",
"city_name": "New-York",
"country_codes": ["1"]
},
{
"site_name": "DC-2",
"city_name": "Tel-Aviv",
"country_codes": ["972"]
}
]
}
The Distribution Rules format also supports wildcards. For example:
{
"distribution_rules": [
{
"site_name": "",
"city_name": "",
"country_codes": ["*"]
}
]
}
Distribute the Release Bundle named myApp with version 1.0.0. Use the distribution rules defined in the specified file.
jf rbd --dist-rules=/path/to/dist-rules.json myApp 1.0.0
Distribute the Release Bundle named myApp with version 1.0.0 using the default distribution rules. Map files under the source directory to be placed under the target directory.
jf rbd --dist-rules=/path/to/dist-rules.json --mapping-pattern="(*)/source/(*)" --mapping-target="{1}/target/{2}" myApp 1.0.0
Synchronously distribute a Release Bundle associated with project "proj"
jf rbd --dist-rules=/path/to/dist-rules.json --sync --project="proj" myApp 1.0.0
This command allows deleting all Release Bundle promotions to a specified environment or deleting a Release Bundle locally altogether. Deleting locally means distributions of the Release Bundle will not be deleted.
Command-name
release-bundle-delete-local
Abbreviation
rbdell
Command arguments:
release bundle name
Name of the Release Bundle to delete.
release bundle version
Version of the Release Bundle to delete.
environment
If provided, all promotions to this environment are deleted. Otherwise, the Release Bundle is deleted locally with all its promotions.
Command options:
--project
[Optional] Project key associated with the Release Bundle version.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--sync
[Default: true] Set to false to run asynchronously.
Locally delete the Release Bundle named myApp with version 1.0.0.
jf rbdell myApp 1.0.0
Locally delete the Release Bundle named myApp with version 1.0.0. Run the command synchronously and skip the confirmation message.
jf rbdell --quiet --sync myApp 1.0.0
Delete all promotions of the specified Release Bundle version to environment "PROD".
jf rbdell myApp 1.0.0 PROD
This command will delete distributions of a Release Bundle from a distribution target, such as an Edge node.
Command-name
release-bundle-delete-remote
Abbreviation
rbdelr
Command arguments:
release bundle name
Name of the Release Bundle to delete.
release bundle version
Version of the Release Bundle to delete.
Command options:
--city
[Default: *] Wildcard filter for site city name.
--country-codes
[Default: *] semicolon-separated(;) list of wildcard filters for site country codes.
--dist-rules
[Optional] Path to a file, which includes the Distribution Rules in a JSON format. See the "Distribution Rules Structure" below.
--dry-run
[Default: false] Set to true to disable communication with JFrog Distribution.
--max-wait-minutes
[Default: 60] Max minutes to wait for sync distribution.
--project
[Optional] Project key associated with the Release Bundle version.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--site
[Default: *] Wildcard filter for site name.
--sync
[Default: true] Set to false to run asynchronously.
Delete the distributions of version 1.0.0 of the Release Bundle named myApp from Edge nodes matching the provided distribution rules defined in the specified file.
jf rbdelr --dist-rules=/path/to/dist-rules.json myApp 1.0.0
Delete the distributions of the Release Bundle associated with project "proj" from the provided Edge nodes. Run the command synchronously and skip the confirmation message.
jf rbdelr --dist-rules=/path/to/dist-rules.json --project="proj" --quiet --sync myApp 1.0.0
Release Lifecycle Management supports distributing your Release Bundles to remote Edge nodes within an air-gapped environment. This use case is mainly intended for organizations that have two or more JFrog instances that have no network connection between them.
The following command allows exporting a Release Bundle as an archive to the filesystem that can be transferred to a different instance in an air-gapped environment.
Command-name
release-bundle-export
Abbreviation
rbe
Command arguments:
release bundle name
Name of the Release Bundle to export.
release bundle version
Version of the Release Bundle to export.
target pattern
The argument is optional and specifies the local file system target path.
If the target path ends with a slash, the path is assumed to be a directory. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a directory into which files should be downloaded.
If there is no terminal slash, the target path is assumed to be a file to which the downloaded file should be renamed. For example, if you specify the target as "a/b", the downloaded file is renamed to "b".
Command options:
--project
[Optional] Project key associated with the Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
--mapping-pattern
[Optional] Specify a list of input regex mapping pairs that define where the queried artifact is located and where it should be placed after it is imported. Use this option if the path on the target is different than the source path.
--mapping-target
[Optional] Specify a list of output regex mapping pairs that define where the queried artifact is located and where it should be placed after it is imported. Use this option if the path on the target is different than the source path.
--split-count
[Optional] The maximum number of parts that can be concurrently uploaded per file during a multi-part upload. Set to 0 to disable multi-part upload.
--min-split
[Optional] Minimum file size in KB to split into ranges when downloading. Set to -1 for no splits.
Export version 1.0.0 of the Release Bundle named "myApp":
jf rbe myApp 1.0.0
Download the file to a specific location:
jf rbe myApp 1.0.0 /user/mybundle/
You can import a Release Bundle archive from the exported zip file.
Please note this functionality only works on Edge nodes within an air-gapped environment.
Command-name
release-bundle-import
Abbreviation
rbi
Command arguments:
path to archive
Path to the Release Bundle archive on the filesystem.
Command options:
--project
[Optional] Project key associated with the Release Bundle version.
--server-id
[Optional] Platform Server ID configured using the 'jf config' command.
Import version 1.0.0 of a Release Bundle named "myExportedApp":
jf rbi ./myExportedApp.zip
Use the following command to download the contents of a Release Bundle v2 version:
jf rt dl --bundle [release-bundle-name]/[release-bundle-version]
For more information, see Downloading Files.
JFrog CLI offers a set of commands for managing Artifactory configuration entities.
This command allows creating a bulk of users. The details of the users are provided in a CSV format file. Here's the file format.
"username","password","email"
"username1","password1","[email protected]"
"username2","password1","[email protected]"
Note: The first line in the CSV contains the column headers. It is mandatory and is used by the command to map each cell value to the user's details.
The CSV can include additional columns, with different headers, which will be ignored by the command.
Command-name
rt users-create
Abbreviation
rt uc
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--csv
[Mandatory] Path to a CSV file with the users' details. The first row of the file should include the username,password,email headers.
--replace
[Optional] Set to true if you'd like existing users or groups to be replaced.
--users-groups
[Optional] A list of comma-separated(,) groups for the new users to be associated with.
Command arguments:
The command accepts no arguments.
Create new users according to details defined in the path/to/users.csv file.
jf rt users-create --csv path/to/users.csv
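As an additional illustration, the --users-groups and --replace options documented above might be combined as follows (the group names are placeholders):
jf rt users-create --csv path/to/users.csv --users-groups "readers,deployers" --replace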
This command allows deleting a bulk of users. The command accepts a list of usernames to delete. The list can be provided either as a comma-separated argument, or as a CSV file which includes one column with the usernames. Here's the CSV format.
"username"
"username1"
"username2"
"username2"
The first line in the CSV contains the column header. It is mandatory and is used by the command to map each cell value to a username.
The CSV can include additional columns, with different headers, which will be ignored by the command.
Command-name
rt users-delete
Abbreviation
rt udel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--csv
[Optional] Path to a CSV file with the usernames to delete. The first row of the file is reserved for the column headers. It must include the "username" header.
Command arguments:
users list
comma-separated(,) list of usernames to delete. If the --csv command option is used, then this argument becomes optional.
Delete the users according to the usernames defined in the path/to/users.csv file.
jf rt users-delete --csv path/to/users.csv
Delete the users according to the u1, u2 and u3 usernames.
jf rt users-delete "u1,u2,u3"
This command creates a new users group.
Command-name
rt group-create
Abbreviation
rt gc
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
Command arguments:
group name
The name of the group to create.
Create a new group named reviewers.
jf rt group-create reviewers
This command adds a list of existing users to a group.
Command-name
rt group-add-users
Abbreviation
rt gau
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
Command arguments:
group name
The name of the group to add users to.
users list
Comma-separated(,) list of usernames to add to the specified group.
Add the users u1, u2 and u3 to the reviewers group.
jf rt group-add-users "reviewers" "u1,u2,u3"
This command deletes a group.
Command-name
rt group-delete
Abbreviation
rt gdel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
Command arguments:
group name
The name of the group to delete.
Delete the reviewers group.
jf rt group-delete "reviewers"
JFrog CLI offers a set of commands for managing Artifactory repositories. You can create, update and delete repositories. To make it easier to manage repositories, the commands which create and update the repositories accept a pre-defined configuration template file. This template file can also include variables, which can be later replaced with values, when creating or updating the repositories. The configuration template file is created using the jf rt repo-template command.
The jf rt repo-template (or jf rt rpt) command provides interactive prompts for building a JSON configuration template.
On running jf rt repo-template <filename>.json, the CLI prompts as follows:
Select the template type. Following are the possible options:
create: Template for creating a new repository
update: Template for updating an existing repository
Select the template type (press Tab for options):
For example
Select the template type (press Tab for options): create
Enter a unique identifier for the repository key. This is the key field in the final JSON.
Insert the repository key >
For example
Insert the repository key > npm-local
Note: If you want to reuse the template for creating multiple repositories, use a variable as follows:
Insert the repository key > ${repo-key-var}
Select the repository class. Following are the possible options:
local: A physical, locally-managed repository into which you can deploy artifacts
remote: A caching proxy for a repository managed at a remote URL
Note: For remote repositories, you need to enter a remote repository url
virtual: An aggregation of several repositories with the same package type under a common URL.
federated: A Federation is a collection of repositories of Federated type in different JPDs that are automatically configured for full bi-directional mirroring
Select the repository class (press Tab for options):
For example
Select the repository class (press Tab for options): local
Select the repository package type. Following are the possible options:
alpine
bower
chef
cocoapods
composer
conan
cran
debian
docker
gems
generic
gitlfs
go
gradle
helm
ivy
maven
npm
nuget
opkg
pypi
puppet
rpm
sbt
vagrant
yum
Note: After selecting the repository package type, you can exit by entering :x or proceed to make advanced configurations.
For additional optional configurations to fine-tune the repository's behavior, configure the following:
This table explains the optional keys available for configuring your desired repository in JFrog Artifactory.
Configuration Key
Description
Local
Remote
Virtual
Federated
allowAnyHostAuth
Allows sending credentials to any host upon redirection.
✔️
archiveBrowseEnabled
Enables viewing archive contents (e.g., ZIPs) in the UI.
✔️
✔️
artifactoryRequestsCanRetrieveRemoteArtifacts
Allows the virtual repository to resolve artifacts from its remote members.
✔️
assumedOfflinePeriodSecs
Sets the time (in seconds) to consider a remote repository offline after a connection failure.
✔️
blackedOut
Temporarily disables the repository, blocking all traffic.
✔️
✔️
✔️
blockMismatchingMimeTypes
Blocks caching of remote files if their MIME type is incorrect.
✔️
blockPushingSchema1
Blocks pushes from older Docker v1 clients, enforcing the v2 schema.
✔️
✔️
✔️
bypassHeadRequests
Skips initial HEAD requests and sends GET requests directly to the remote repository.
✔️
cdnRedirect
Redirects client download requests to a configured Content Delivery Network (CDN).
✔️
✔️
✔️
checksumPolicyType
Defines how the server handles artifact checksums during deployment.
✔️
✔️
clientTlsCertificate
Specifies a client-side TLS certificate to use for authenticating to the remote repository.
✔️
contentSynchronisation
Configures properties for synchronizing content from a remote Artifactory instance.
✔️
defaultDeploymentRepo
Sets the default local repository for artifacts deployed to this virtual repository.
✔️
description
Provides a short, human-readable summary of the repository's purpose.
✔️
✔️
✔️
✔️
downloadRedirect
Redirects client downloads directly to the source URL instead of proxying through Artifactory.
✔️
✔️
✔️
enableCookieManagement
Enables stateful cookie handling for requests to the remote repository.
✔️
environment
Adds a tag to classify the repository for a specific lifecycle stage (e.g., dev, qa, prod).
✔️
✔️
✔️
✔️
excludesPattern
Defines a list of file path patterns to block from deployment.
✔️
✔️
✔️
✔️
failedRetrievalCachePeriodSecs
Sets the time (in seconds) to cache a "not found" response for a failed artifact download.
✔️
fetchJarsEagerly
For remote Maven repositories, proactively fetches JAR files when the index is updated.
✔️
fetchSourcesEagerly
For remote Maven repositories, proactively fetches source JARs when the index is updated.
✔️
forceMavenAuthentication
For virtual Maven repositories, requires authentication for all requests.
✔️
handleReleases
Determines if the repository can host stable, final release versions of packages.
✔️
✔️
✔️
handleSnapshots
Determines if the repository can host development or pre-release versions of packages.
✔️
✔️
✔️
hardFail
Causes requests to fail immediately upon any network error with the remote repository.
✔️
includesPattern
Defines a list of file path patterns that are allowed for deployment.
✔️
✔️
✔️
✔️
keyPair
Assigns a GPG key pair to the virtual repository for signing metadata files.
✔️
localAddress
Binds outgoing connections to the remote repository to a specific local IP address.
✔️
maxUniqueSnapshots
Limits the number of unique snapshot or pre-release versions stored for an artifact.
✔️
✔️
missedRetrievalCachePeriodSecs
Sets the time (in seconds) to cache a "not found" response for remote repository metadata.
✔️
notes
Offers a space for longer, more detailed information about the repository.
✔️
✔️
✔️
✔️
offline
Prevents Artifactory from making any network connections to the remote repository.
✔️
optionalIndexCompressionFormats
Defines additional compression formats for repository index files to support various clients.
✔️
✔️
✔️
password
Sets the password for authenticating to the remote repository.
✔️
pomRepositoryReferencesCleanupPolicy
For virtual Maven repositories, controls how to handle repository references in POM files.
✔️
primaryKeyPairRef
Specifies the primary GPG key pair to use for signing metadata.
✔️
priorityResolution
Gives a repository priority during package resolution within a virtual repository.
✔️
✔️
✔️
projectKey
Links the repository to a specific Project for organization and permission management.
✔️
✔️
✔️
✔️
propertySets
Associates required metadata fields (properties) with the repository to enforce governance.
✔️
✔️
✔️
proxy
Specifies a pre-configured network proxy to use for requests to the remote repository.
✔️
rejectInvalidJars
For remote Java-based repositories, rejects and does not cache invalid or corrupt JAR files.
✔️
remoteRepoChecksumPolicyType
Defines how to handle checksums for artifacts downloaded from the remote repository.
✔️
repoLayoutRef
Assigns a folder structure layout to the repository, enabling metadata parsing.
✔️
✔️
✔️
✔️
repositories
Defines the list of underlying repositories aggregated by the virtual repository.
✔️
retrievalCachePeriodSecs
Sets the time (in seconds) to cache metadata for successfully downloaded remote artifacts.
✔️
shareConfiguration
Shares the remote repository's configuration with other federated Artifactory instances.
✔️
snapshotVersionBehavior
Defines how the server stores and manages snapshot versions.
✔️
✔️
socketTimeoutMillis
Sets the timeout (in milliseconds) for network connections to the remote repository.
✔️
storeArtifactsLocally
Controls whether artifacts downloaded from the remote repository are cached locally.
✔️
suppressPomConsistencyChecks
For Maven repositories, disables validation checks on deployed POM files.
✔️
✔️
✔️
synchronizeProperties
Synchronizes artifact properties from a remote Artifactory instance.
✔️
unusedArtifactsCleanupEnabled
Enables the automatic cleanup of unused cached artifacts from the remote repository.
✔️
unusedArtifactsCleanupPeriodHours
Sets the time (in hours) an unused cached artifact must wait before cleanup.
✔️
After adding your desired configurations, enter :x to save the template file. The CLI creates and saves the JSON template and exits the interactive prompt.
A sample JSON template is as follows:
{
"description": "my npm local repository",
"key": "my-npm-local", # example variable, ${repo-name}
"packageType": "npm",
"rclass": "local"
}
Reuse Template with Variables
Reuse the template by adding variables to the keys and providing their values explicitly when executing the jf rt repo-create command.
For example
jf rt repo-create repotemplate.json --vars "repo-name=my-npm-local"
If you want to pass multiple vars, enter the list of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." (wrapped by quotes) to be replaced in the template. In the template, the variables should be used as follows: ${key1}.
jf rt repo-create repotemplate.json --vars "repo-name=my-npm-local;package-type=npm;repo-type=local"
These two commands create a new repository and update an existing repository, respectively. Both commands accept as an argument a configuration template, which can be created by the jf rt repo-template command. The template also supports variables, which can be replaced with values provided when it is used.
Command-name
rt repo-create / rt repo-update
Abbreviation
rt rc / rt ru
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the template. In the template, the variables should be used as follows: ${key1}.
Command arguments:
template path
Specifies the local file system path for the template file to be used for the repository creation. The template can be created using the "jf rt rpt" command.
Example 1
Create a repository, using the template.json file previously generated by the repo-template command.
jf rt repo-create template.json
Example 2
Update a repository, using the template.json file previously generated by the repo-template command.
jf rt repo-update template.json
Example 3
Update a repository, using the template.json file previously generated by the repo-template command. Replace the repo-name variable inside the template with a name for the updated repository.
jf rt repo-update template.json --vars "repo-name=my-repo"
This command permanently deletes a repository, including all of its content.
Command name
rt repo-delete
Abbreviation
rt rdel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
Command arguments:
repository key
Specifies the repositories that should be removed. You can use wildcards to specify multiple repositories.
Delete a repository from Artifactory.
jf rt repo-delete generic-local
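Since the repository key argument accepts wildcards, a command along the following lines could remove several repositories at once and skip the confirmation prompt (the pattern is a placeholder; use with care):
jf rt repo-delete "generic-*" --quiet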
JFrog CLI offers commands for creating and deleting replication jobs in Artifactory. To make it easier to create replication jobs, the command which creates the replication job accepts a pre-defined configuration template file. This template file can also include variables, which can be later replaced with values when creating the replication job. The configuration template file is created using the jf rt replication-template command.
This command creates a configuration template file, which will be used as an argument for the jf rt replication-create command.
When using this command to create the template, you can also provide replaceable variables instead of fixed values. Then, when the template is used to create replication jobs, values can be provided to replace the variables in the template.
Command-name
rt replication-template
Abbreviation
rt rplt
Command options:
The command has no options.
Command arguments:
template path
Specifies the local file system path for the template file created by the command. The file should not exist.
Create a configuration template, with two variables for the source and target repositories. Then, create a replication job using this template, and provide source and target repository names to replace the variables.
$ jf rt rplt template.json
Select replication job type (press Tab for options): push
Enter source repo key > ${source}
Enter target repo key > ${target}
Enter target server id (press Tab for options): my-server-id
Enter cron expression for frequency (for example: 0 0 12 * * ? will replicate daily) > 0 0 12 * * ?
You can type ":x" at any time to save and exit.
Select the next property > :x
[Info] Replication creation config template successfully created at template.json.
$
$ jf rt rplc template.json --vars "source=generic-local;target=generic-local"
[Info] Done creating replication job.
This command creates a new replication job for a repository. The command accepts as an argument a configuration template, which can be created by the jf rt replication-template command. The template also supports variables, which can be replaced with values, provided when it is used.
Command-name
replication-create
Abbreviation
rt rplc
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the template. In the template, the variables should be used as follows: ${key1}.
Command arguments:
template path
Specifies the local file system path for the template file to be used for the replication job creation. The template can be created using the "jf rt rplt" command.
Example 1
Create a replication job, using the template.json file previously generated by the replication-template command.
jf rt rplc template.json
Example 2
Update a replication job, using the template.json file previously generated by the replication-template command. Replace the source and target variables inside the template with the names of the replication source and target repositories.
jf rt rplc template.json --vars "source=my-source-repo;target=my-target-repo"
This command permanently deletes replication jobs from a repository.
Command name
rt replication-delete
Abbreviation
rt rpldel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
Command arguments:
repository key
The repository from which the replications will be deleted.
Delete the replication jobs configured for the my-repo-name repository.
jf rt rpldel my-repo-name
JFrog CLI offers commands for creating, updating and deleting permission targets in Artifactory. To make it easier to create and update permission targets, the commands which create and update the permission targets accept a pre-defined configuration template file. This template file can also include variables, which can be later replaced with values when creating or updating the permission target. The configuration template file is created using the jf rt permission-target-template command.
This command creates a configuration template file, which will be used as an argument for the jf rt permission-target-create and jf rt permission-target-update commands.
Command-name
rt permission-target-template
Abbreviation
rt ptt
Command options:
The command has no options.
Command arguments:
template path
Specifies the local file system path for the template file created by the command. The file should not exist.
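For illustration, generating a permission target template could look as follows (the file name is a placeholder); the command then walks you through interactive prompts:
jf rt ptt template.json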
These commands create/update a permission target. The commands accept as an argument a configuration template, which should be created by the jf rt permission-target-template command beforehand. The template also supports variables, which can be replaced with values, provided when it is used.
Command-name
permission-target-create / permission-target-update
Abbreviation
rt ptc / rt ptu
Command arguments:
template path
Specifies the local file system path for the template file to be used for the permission target creation or update. The template should be created using the "jf rt ptt" command.
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the template. In the template, the variables should be used as follows: ${key1}.
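For illustration, assuming the template defines a ${perm-name} variable, creating a permission target from it might look as follows (the variable and its value are placeholders):
jf rt ptc template.json --vars "perm-name=my-permission-target"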
This command permanently deletes a permission target.
Command name
rt permission-target-delete
Abbreviation
rt ptdel
Command options:
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command.
--quiet
[Default: $CI] Set to true to skip the delete confirmation message.
Command arguments:
permission target name
The permission target that should be removed.
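For illustration, deleting a permission target and skipping the confirmation prompt might look as follows (the permission target name is a placeholder):
jf rt ptdel my-permission-target --quiet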
This command is used to upload files to Artifactory.
jf rt u [command options] <Source path> <Target path>
jf rt u --spec=<File Spec path> [command options]
Upload a file called froggy.tgz to the root of the my-local-repo repository.
Collect all the zip files located under the build directory (including subdirectories), and upload them to the my-local-repo repository, under the zipFiles folder, while maintaining the original names of the files.
Collect all the zip files located under the build directory (including subdirectories), and upload them to the my-local-repo repository, under the zipFiles folder, while maintaining the original names of the files. Also delete all files in the my-local-repo repository, under the zipFiles folder, except for the files which were uploaded by this command.
Collect all files located under the build directory (including subdirectories), and upload them to the my-release-local repository, under the files folder, while maintaining the original names of the artifacts. Exclude (do not upload) files, which include install as part of their path, and have the pack extension. This example uses a wildcard pattern. See Example 5, which uses regular expressions instead.
Collect all files located under the build directory (including subdirectories), and upload them to the my-release-local repository, under the files folder, while maintaining the original names of the artifacts. Exclude (do not upload) files, which include install as part of their path, and have the pack extension. This example uses a regular expression. See Example 4, which uses a wildcard pattern instead.
Collect all files located under the build directory that match the **/*.zip ANT pattern, and upload them to the my-release-local repository, under the files folder, while maintaining the original names of the artifacts.
Package all files located under the build directory (including subdirectories) into a ZIP archive named my-archive.zip, and upload the archive to the my-local-repo repository.
This command is used to download files from Artifactory.
Download from Remote Repositories: By default, the command downloads only the files that are cached on the current Artifactory instance. It does not retrieve files from remote Artifactory instances accessed via remote or virtual repositories. To enable the command to download files from remote Artifactory instances (proxied through remote repositories), set the JFROG_CLI_TRANSITIVE_DOWNLOAD environment variable to true. This feature is available in Artifactory version 7.17 or later. Note that remote downloads are supported only for remote repositories that proxy other Artifactory instances. Downloads from remote repositories that proxy non-Artifactory repositories are not supported. IMPORTANT: Enabling the JFROG_CLI_TRANSITIVE_DOWNLOAD environment variable may increase the load on the remote Artifactory instance. It is advisable to use this setting cautiously.
jf rt dl [command options] <Source path> [Target path]
jf rt dl --spec=<File Spec path> [command options]
Download an artifact called cool-froggy.zip located at the root of the my-local-repo repository to the current directory.
Download all artifacts located under the all-my-frogs directory in the my-local-repo repository to the all-my-frogs folder under the current directory.
Download all artifacts located in the my-local-repo repository with a jar extension to the all-my-frogs folder under the current directory.
Download the latest file uploaded to the all-my-frogs folder in the my-local-repo repository.
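For instance, the first download example above could be sketched as the following command (repository and file names are taken from the example description):
jf rt dl "my-local-repo/cool-froggy.zip"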
This command is used to copy files in Artifactory
jf rt cp [command options] <Source path> <Target path>
jf rt cp --spec=<File Spec path> [command options]
Copy all artifacts located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository.
Copy all zip files located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository.
Copy all artifacts located under /rabbit in the source-frog-repo repository and with property "Version=1.0" into the same path in the target-frog-repo repository.
Copy all artifacts located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository without maintaining the original subdirectory hierarchy.
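For instance, the first copy example above might translate to a command along these lines:
jf rt cp "source-frog-repo/rabbit/*" target-frog-repo/rabbit/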
This command is used to move files in Artifactory
jf rt mv [command options] <Source path> <Target path>
jf rt mv --spec=<File Spec path> [command options]
Move all artifacts located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository.
Move all zip files located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository.
Move all artifacts located under /rabbit in the source-frog-repo repository and with property "Version=1.0" into the same path in the target-frog-repo repository .
Move all artifacts located under /rabbit in the source-frog-repo repository into the same path in the target-frog-repo repository without maintaining the original subdirectory hierarchy.
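The first move example above could, for instance, look roughly like this:
jf rt mv "source-frog-repo/rabbit/*" target-frog-repo/rabbit/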
This command is used to delete files in Artifactory
jf rt del [command options] <Delete path>
jf rt del --spec=<File Spec path> [command options]
Delete all artifacts located under /rabbit in the frog-repo repository.
Delete all zip files located under /rabbit in the frog-repo repository.
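A possible command for the first delete example above (add --quiet to skip the confirmation prompt if needed):
jf rt del "frog-repo/rabbit/*"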
This command is used to search and display files in Artifactory.
jf rt s [command options] <Search path>
jf rt s --spec=<File Spec path> [command options]
Display a list of all artifacts located under /rabbit in the frog-repo repository.
Display a list of all zip files located under /rabbit in the frog-repo repository.
Display a list of the files under example-repo-local with the following fields: path, actual_md5, modified_by, updated and depth.
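The first search example above might be expressed roughly as:
jf rt s "frog-repo/rabbit/*"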
This command is used for setting properties on existing files in Artifactory.
jf rt sp [command options] <Files pattern> <Files properties>
jf rt sp <artifact properties> --spec=<File Spec path> [command options]
Set the properties on all the zip files in the generic-local repository. The command will set the property "a" with "1" value and the property "b" with two values: "2" and "3".
The command will set the property "a" with "1" value and the property "b" with two values: "2" and "3" on all files found by the File Spec my-spec.
Set the properties on all the jar files in the maven-local repository. The command will set the property "version" with "1.0.0" value and the property "release" with "stable" value.
The command will set the property "environment" with "production" value and the property "team" with "devops" value on all files found by the File Spec prod-spec.
Set the properties on all the tar.gz files in the devops-local repository. The command will set the property "build" with "102" value and the property "branch" with "main" value.
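The first set-properties example above could be sketched as follows, with the files pattern and the properties passed as the two arguments:
jf rt sp "generic-local/*.zip" "a=1;b=2,3"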
This command is used for deleting properties from existing files in Artifactory.
jf rt delp [command options] <Files pattern> <Properties list>
jf rt delp <artifact properties> --spec=<File Spec path> [command options]
Remove the properties version and release from all the jar files in the maven-local repository.
Delete the properties build and branch from all tar.gz files in the devops-local repo.
Remove the properties status, phase and stage from all deb files that start with DEV in the debian-repository.
Delete the environment property from /tests/local/block.rpm in the centos-repo.
Remove the properties component, layer and level from files in the docker-hub repository.
Command name
rt upload
Abbreviation
rt u
Command arguments:
The command takes two arguments, source path and target path. In case the --spec option is used, the commands accept no arguments.
Source path
The first argument specifies the local file system path to artifacts that should be uploaded to Artifactory. You can specify multiple artifacts by using wildcards or a regular expression as designated by the --regexp command option. Please read the --regexp option description for more information.
Target path
The second argument specifies the target path in Artifactory in the following format: [repository name]/[repository path]
If the target path ends with a slash, the path is assumed to be a folder. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a folder in Artifactory into which files should be uploaded. If there is no terminal slash, the target path is assumed to be a file to which the uploaded file should be renamed. For example, if you specify the target as "repo-name/a/b", the uploaded file is renamed to "b" in Artifactory.
For flexibility in specifying the upload path, you can include placeholders in the form of {1}, {2} which are replaced by corresponding tokens in the source path that are enclosed in parentheses. For more details, please refer to Using Placeholders.
Command options:
When using the * or ; characters in the upload command options or arguments, wrap the whole options or arguments string in quotes (") so that the * or ; characters are not interpreted by the shell.
--archive
[Optional] Set to "zip" to pack and deploy the files to Artifactory inside a ZIP archive. Currently, the only packaging format supported is zip.
--server-id
[Optional] Server ID configured using the jf c add command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--target-props
[Optional] List of semicolon-separated(;) Artifactory properties specified as "key=value" pairs to be attached to the uploaded files (for example: "key1=value1;key2=value21,value22;key3=value3").
--deb
[Optional] Used for Debian packages only. Specifies the distribution/component/architecture of the package. If the value for distribution, component or architecture includes a slash, the slash should be escaped with a backslash.
--flat
[Default: false] If true, files are uploaded to the exact target path specified and their hierarchy in the source file system is ignored. If false, files are uploaded to the target path while maintaining their file system hierarchy. If placeholders are used, the value of this option is ignored. Note: In JFrog CLI v1, the default value of the --flat option is true.
--recursive
[Default: true] If true, files are also collected from sub-folders of the source directory for upload. If false, only files specifically in the source directory are uploaded.
--regexp
[Default: false] If true, the command will interpret the first argument, which describes the local file-system path of artifacts to upload, as a regular expression. If false, it will interpret the first argument as a wildcard expression. The above also applies for the --exclusions option. If you have specified that you are using regular expressions, then the beginning of the expression must be enclosed in parentheses. For example: a/b/c/(.*)/file.zip
--ant
[Default: false] If true, the command will interpret the first argument, which describes the local file-system path of artifacts to upload, as an ANT pattern. If false, it will interpret the first argument as a wildcard expression. The above also applies for the --exclusions option.
--threads
[Default: 3] The number of parallel threads that should be used to upload where each thread uploads a single artifact at a time.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been uploaded. If false, the command is fully executed and uploads artifacts as specified.
--symlinks
[Default: false] If true, the command will preserve the soft links structure in Artifactory. The symlink file representation will contain the symbolic link and checksum properties.
--explode
[Default: false] If true, the command will extract an archive containing multiple artifacts after it is deployed to Artifactory, while maintaining the archive's file structure.
--include-dirs
[Default: false] If true, the source path applies to bottom-chain directories and not only to files. Bottom-chain directories are either empty or do not include other directories that match the source path.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards, regular expressions or ANT patterns, according to the value of the --regexp and --ant options. Please read the --regexp and --ant options description for more information.
--sync-deletes
[Optional] Specific path in Artifactory, under which to sync artifacts after the upload. After the upload, this path will include only the artifacts uploaded during this upload operation. The other files under this path will be deleted.
--quiet
[Default: false] If true, the delete confirmation message is skipped.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--retries
[Default: 3] Number of upload retries.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--chunk-size
[Default: 20] The upload chunk size in MiB that can be concurrently uploaded during a multi-part upload. This option, as well as the functionality of multi-part upload, requires Artifactory with S3 or GCP storage.
--min-split
[Default: 200] The minimum file size in MiB required to attempt a multi-part upload. This option, as well as the functionality of multi-part upload, requires Artifactory with S3 or GCP storage.
--split-count
[Default: 5] The maximum number of parts that can be concurrently uploaded per file during a multi-part upload. Set to 0 to disable multi-part upload. This option, as well as the functionality of multi-part upload, requires Artifactory with S3 or GCP storage.
jf rt u froggy.tgz my-local-repo
jf rt u "build/*.zip" my-local-repo/zipFiles/
jf rt u "build/*.zip" my-local-repo/zipFiles/ --sync-deletes="my-local-repo/zipFiles/"
jf rt u "build/" my-release-local/files/ --exclusions="\*install\*pack*"
jf rt u "build/" my-release-local/files/ --regexp --exclusions="(.*)install.*pack$"
jf rt u "build/**/*.zip" my-release-local/files/ --ant
jf rt u "build/" my-local-repo/my-archive.zip --archive zip
Command name
rt download
Abbreviation
rt dl
Command arguments:
The command takes two arguments: a source path and an optional target path. If the --spec option is used, the command accepts no arguments.
Source path
Specifies the source path in Artifactory, from which the artifacts should be downloaded. You can use wildcards to specify multiple artifacts.
Target path
The second argument is optional and specifies the local file system target path. If the target path ends with a slash, the path is assumed to be a directory. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a directory into which files should be downloaded. If there is no terminal slash, the target path is assumed to be a file to which the downloaded file should be renamed. For example, if you specify the target as "a/b", the downloaded file is renamed to "b". For flexibility in specifying the target path, you can include placeholders in the form of {1}, {2} which are replaced by corresponding tokens in the source path that are enclosed in parenthesis. For more details, please refer to Using Placeholders.
Command options:
When using the * or ; characters in the download command options or arguments, make sure to wrap the whole options or arguments string in quotes (") so that the * or ; characters are not interpreted by the shell.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with all of the specified properties names and values will be downloaded.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be downloaded.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified Release Bundle (v1 or v2) are matched. The value format is bundle-name/bundle-version. If Release Bundles with the same name and version exist for both v1 and v2, the contents of the Release Bundle v2 version are downloaded.
--flat
[Default: false] If true, artifacts are downloaded to the exact target path specified and their hierarchy in the source repository is ignored. If false, artifacts are downloaded to the target path in the file system while maintaining their hierarchy in the source repository. If placeholders are used (see Using Placeholders) and you would like the local download path to be determined by the placeholders only (that is, to avoid concatenating the Artifactory folder hierarchy locally), set this option to false.
--recursive
[Default: true] If true, artifacts are also downloaded from sub-paths under the specified path in the source repository. If false, only artifacts in the specified source path directory are downloaded.
--threads
[Default: 3] The number of parallel threads that should be used to download where each thread downloads a single artifact at a time.
--split-count
[Default: 3] The number of segments into which each file should be split for download (provided the artifact is over --min-split in size). To download each file in a single thread, set to 0.
--retries
[Default: 3] Number of download retries.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
--min-split
[Default: 5120] The minimum size permitted for splitting. Files larger than the specified number will be split into equally sized --split-count segments. Any files smaller than the specified number will be downloaded in a single thread. If set to -1, files are not split.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been downloaded. If false, the command is fully executed and downloads artifacts as specified.
--explode
[Default: false] Set to true to extract an archive after it is downloaded from Artifactory. Supported compression formats: br, bz2, gz, lz4, sz, xz, zstd. Supported archive formats: zip, tar (including any compressed variants like tar.gz), rar.
--bypass-archive-inspection
[Default: false] Set to true to bypass the archive security inspection before it is unarchived. Used with the explode option.
--validate-symlinks
[Default: false] If true, the command will validate that symlinks are pointing to existing and unchanged files, by comparing their sha1. Applicable to files and not directories.
--include-dirs
[Default: false] If true, the source path applies to bottom-chain directories and not only to files. Bottom-chain directories are either empty or do not include other directories that match the source path.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sync-deletes
[Optional] Specific path in the local file system, under which to sync dependencies after the download. After the download, this path will include only the dependencies downloaded during this download operation. The other files under this path will be deleted.
--quiet
[Default: false] If true, the delete confirmation message is skipped.
--sort-by
[Optional] A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc] The order by which fields in the 'sort-by' option should be sorted. Accepts 'asc' or 'desc'.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--archive-entries
[Optional] This option is no longer supported since version 7.90.5 of Artifactory. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--gpg-key
[Optional] Path to the public GPG key file located on the file system, used to validate downloaded release bundle files.
jf rt dl my-local-repo/cool-froggy.zip
jf rt dl my-local-repo/all-my-frogs/ all-my-frogs/
jf rt dl "my-local-repo/*.jar" all-my-frogs/
jf rt dl "my-local-repo/all-my-frogs/" --sort-by=created --sort-order=desc --limit=1
Command name
rt copy
Abbreviation
rt cp
Command arguments:
The command takes two arguments: a source path and a target path. If the --spec option is used, the command accepts no arguments.
Source path
Specifies the source path in Artifactory, from which the artifacts should be copied, in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple artifacts.
Target path
Specifies the target path in Artifactory, to which the artifacts should be copied, in the following format: [repository name]/[repository path]
By default, the target path maintains the source path hierarchy; see the --flat option for more information. If the pattern ends with a slash, the target path is assumed to be a folder. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a folder in Artifactory into which files should be copied. If there is no terminal slash, the target path is assumed to be a file to which the copied file should be renamed. For example, if you specify the target as "repo-name/a/b", the copied file is renamed to "b" in Artifactory.
For flexibility in specifying the target path, you can include placeholders in the form of {1}, {2} which are replaced by corresponding tokens in the source path that are enclosed in parenthesis. For more details, please refer to Using Placeholders.
Command options:
When using the * or ; characters in the copy command options or arguments, make sure to wrap the whole options or arguments string in quotes (") so that the * or ; characters are not interpreted by the shell.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs. (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with these properties names and values will be copied.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be copied.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--flat
[Default: false] If true, artifacts are copied to the exact target path specified and their hierarchy in the source path is ignored. If false, artifacts are copied to the target path while maintaining their source path hierarchy.
--recursive
[Default: true] If true, artifacts are also copied from sub-paths under the specified source path. If false, only artifacts in the specified source path directory are copied.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been copied. If false, the command is fully executed and copies artifacts as specified.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--threads
[Default: 3] Number of threads used for copying the items.
--sort-by
[Optional] A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc] The order by which fields in the 'sort-by' option should be sorted. Accepts 'asc' or 'desc'.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--archive-entries
[Optional] This option is no longer supported since version 7.90.5 of Artifactory. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
jf rt cp source-frog-repo/rabbit/ target-frog-repo/rabbit/
jf rt cp "source-frog-repo/rabbit/*.zip" target-frog-repo/rabbit/
jf rt cp "source-frog-repo/rabbit/*" target-frog-repo/rabbit/ --props=Version=1.0
jf rt cp "source-frog-repo/rabbit/*" target-frog-repo/rabbit/ --flat
Command name
rt move
Abbreviation
rt mv
Command arguments:
The command takes two arguments: a source path and a target path. If the --spec option is used, the command accepts no arguments.
Source path
Specifies the source path in Artifactory, from which the artifacts should be moved, in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple artifacts.
Target path
Specifies the target path in Artifactory, to which the artifacts should be moved, in the following format: [repository name]/[repository path]
By default, the target path maintains the source path hierarchy; see the --flat option for more information. If the pattern ends with a slash, the target path is assumed to be a folder. For example, if you specify the target as "repo-name/a/b/", then "b" is assumed to be a folder in Artifactory into which files should be moved. If there is no terminal slash, the target path is assumed to be a file to which the moved file should be renamed. For example, if you specify the target as "repo-name/a/b", the moved file is renamed to "b" in Artifactory.
For flexibility in specifying the target path, you can include placeholders in the form of {1}, {2} which are replaced by corresponding tokens in the source path that are enclosed in parenthesis. For more details, please refer to Using Placeholders.
Command options:
When using the * or ; characters in the move command options or arguments, make sure to wrap the whole options or arguments string in quotes (") so that the * or ; characters are not interpreted by the shell.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with these properties names and values will be moved.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be moved.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--flat
[Default: false] If true, artifacts are moved to the exact target path specified and their hierarchy in the source path is ignored. If false, artifacts are moved to the target path while maintaining their source path hierarchy.
--recursive
[Default: true] If true, artifacts are also moved from sub-paths under the specified source path. If false, only artifacts in the specified source path directory are moved.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been moved. If false, the command is fully executed and moves artifacts as specified.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--threads
[Default: 3] Number of threads used for moving the items.
--sort-by
[Optional] A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc] The order by which fields in the 'sort-by' option should be sorted. Accepts 'asc' or 'desc'.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--archive-entries
[Optional] This option is no longer supported since version 7.90.5 of Artifactory. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
jf rt mv source-frog-repo/rabbit/ target-frog-repo/rabbit/
jf rt mv "source-frog-repo/rabbit/*.zip" target-frog-repo/rabbit/
jf rt mv "source-frog-repo/rabbit/*" target-frog-repo/rabbit/ --props=Version=1.0
jf rt mv "source-frog-repo/rabbit/*" target-frog-repo/rabbit/ --flat
Command name
rt delete
Abbreviation
rt del
Command arguments:
The command takes one argument, which is the delete path. If the --spec option is used, the command accepts no arguments.
Delete path
Specifies the path in Artifactory of the files that should be deleted in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple artifacts.
Command options:
When using the * or ; characters in the delete command options or arguments, make sure to wrap the whole options or arguments string in quotes (") so that the * or ; characters are not interpreted by the shell.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with these properties names and values will be deleted.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be deleted.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--recursive
[Default: true] If true, artifacts are also deleted from sub-paths under the specified path.
--quiet
[Default: false] If true, the delete confirmation message is skipped.
--dry-run
[Default: false] If true, the command only indicates which artifacts would have been deleted. If false, the command is fully executed and deletes artifacts as specified.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sort-by
[Optional] A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc] The order by which fields in the 'sort-by' option should be sorted. Accepts 'asc' or 'desc'.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--archive-entries
[Optional] This option is no longer supported since version 7.90.5 of Artifactory. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--threads
[Default: 3] Number of threads used for deleting the items.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
jf rt del frog-repo/rabbit/
jf rt del "frog-repo/rabbit/*.zip"
Command name
rt search
Abbreviation
rt s
Command arguments:
The command takes one argument, which is the search path. If the --spec option is used, the command accepts no arguments.
Search path
Specifies the search path in Artifactory, in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple artifacts.
Command options:
When using the * or ; characters in the command options or arguments, make sure to wrap the whole options or arguments string in quotes (") so that the * or ; characters are not interpreted by the shell.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--count
[Optional] Set to true to display only the total of files or folders found.
--include-dirs
[Default: false] Set to true if you'd like to also apply the source path pattern to directories and not only to files.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts with these properties names and values will be returned.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be returned.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--recursive
[Default: true] Set to false if you do not wish to search artifacts inside sub-folders in Artifactory.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sort-by
[Optional] A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc] The order by which fields in the 'sort-by' option should be sorted. Accepts 'asc' or 'desc'.
--transitive
[Optional] Set to true to look for artifacts also in remote repositories. Available on Artifactory version 7.17.0 or higher.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--archive-entries
[Optional] This option is no longer supported since version 7.90.5 of Artifactory. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
--include
[Optional] A list of semicolon-separated(;) fields in the form of "value1;value2;...". Only the path and the fields that are specified will be returned. The fields must be part of the 'items' AQL domain. For the full list of supported fields, check the AQL documentation.
jf rt s frog-repo/rabbit/
jf rt s "frog-repo/rabbit/*.zip"
jf rt s example-repo-local --include="actual_md5;modified_by;updated;depth"
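The following illustrative examples use the --props and --count options described above (the repository, property names, and values are placeholders):
jf rt s "frog-repo/rabbit/*" --props="Version=1.0"
jf rt s "frog-repo/*" --count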
Command name
rt set-props
Abbreviation
rt sp
Command arguments:
The command takes two arguments: a files pattern and the files properties. If the --spec option is used, the command accepts no arguments.
Files pattern
Files that match the pattern will be set with the specified properties.
Files properties
A list of semicolon-separated(;) key-values in the form of key1=value1;key2=value2,..., to be set on the matching files.
Command options:
When using the * or ; characters in the command options or arguments, make sure to wrap the whole options or arguments string in quotes (") so that the * or ; characters are not interpreted by the shell.
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--spec
[Optional] Path to a file spec. For more details, please refer to Using File Specs.
--spec-vars
[Optional] List of semicolon-separated(;) variables in the form of "key1=value1;key2=value2;..." to be replaced in the File Spec. In the File Spec, the variables should be used as follows: ${key1}.
--props
[Optional] List of semicolon-separated(;) properties in the form of "key1=value1;key2=value2,...". Only files with these properties names and values are affected.
--exclude-props
[Optional] A list of Artifactory properties specified as semicolon-separated(;) "key=value" pairs (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be affected.
--recursive
[Default: true] When false, artifacts inside sub-folders in Artifactory will not be affected.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--include-dirs
[Default: false] When true, the properties will also be set on folders (and not just files) in Artifactory.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--exclusions
[Optional] A list of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sort-by
[Optional] A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc] The order by which fields in the 'sort-by' option should be sorted. Accepts 'asc' or 'desc'.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--archive-entries
[Optional] This option is no longer supported since version 7.90.5 of Artifactory. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--threads
[Default: 3] Number of working threads.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
jf rt sp "generic-local/*.zip" "a=1;b=2,3"
jf rt sp "a=1;b=2,3" --spec my-spec
jf rt sp "maven-local/*.jar" "version=1.0.0;release=stable"
jf rt sp "environment=production;team=devops" --spec prod-spec
jf rt sp "devops-local/*.tar.gz" "build=102;branch=main"
Command name
rt delete-props
Abbreviation
rt delp
Command arguments:
The command takes two arguments: a files pattern and a properties list. If the --spec option is used, the command accepts no arguments.
Files pattern
Specifies the files pattern in the following format: [repository name]/[repository path].
You can use wildcards to specify multiple repositories and files.
Properties list
A comma-separated(,) list of properties, in the form of key1,key2,..., to be deleted from the matching files.
Command options:
When using the * or ; characters in the command options or arguments, make sure to wrap the whole options or arguments string in quotes (") so that the * or ; characters are not interpreted by the shell.
--server-id
[Optional] Artifactory Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--props
[Optional] List of semicolon-separated(;) properties in the form of "key1=value1;key2=value2,...". Only files with these properties are affected.
--exclude-props
[Optional] List of semicolon-separated(;) Artifactory properties specified as "key=value" (for example: "key1=value1;key2=value2;key3=value3"). Only artifacts without all of the specified properties names and values will be affected.
--recursive
[Default: true] When false, artifacts inside sub-folders in Artifactory will not be affected.
--build
[Optional] If specified, only artifacts of the specified build are matched. The property format is build-name/build-number. If you do not specify the build number, the artifacts are filtered by the latest build number.
--bundle
[Optional] If specified, only artifacts of the specified bundle are matched. The value format is bundle-name/bundle-version.
--include-dirs
[Default: false] When true, the properties will also be set on folders (and not just files) in Artifactory.
--fail-no-op
[Default: false] Set to true if you'd like the command to return exit code 2 if no files are affected.
--exclusions
[Optional] List of semicolon-separated(;) exclude patterns. Allows using wildcards.
--sort-by
[Optional] A list of semicolon-separated(;) fields to sort by. The fields must be part of the 'items' AQL domain. For more information, read the AQL documentation.
--sort-order
[Default: asc] The order by which fields in the 'sort-by' option should be sorted. Accepts 'asc' or 'desc'.
--limit
[Optional] The maximum number of items to fetch. Usually used with the 'sort-by' option.
--offset
[Optional] The offset from which to fetch items (i.e. how many items should be skipped). Usually used with the 'sort-by' option.
--archive-entries
[Optional] This option is no longer supported since version 7.90.5 of Artifactory. If specified, only archive artifacts containing entries matching this pattern are matched. You can use wildcards to specify multiple artifacts.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--retries
[Default: 3] Number of HTTP retry attempts.
--retry-wait-time
[Default: 0s] Number of seconds or milliseconds to wait between retries. The numeric value should either end with s for seconds or ms for milliseconds (for example: 10s or 100ms).
jf rt delp "maven-local/*.jar" "version,release"
jf rt delp "devops-local/*.tar.gz" "build,branch"
jf rt delp "debian-repository/DEV*.deb" "status,phase,stage"
jf rt delp "centos-repo/tests/local/block.rpm" "environment"
jf rt delp "docker-hub/*" "component,layer,level"
JFrog CLI includes integration with Maven, allowing you to resolve dependencies and deploy build artifacts from and to Artifactory, while collecting build-info and storing it in Artifactory.
Before using the jf mvn command, the project needs to be pre-configured with the Artifactory server and repositories, to be used for building and publishing the project. The jf mvn-config command should be used once to add the configuration to the project. The command should run while inside the root directory of the project. The configuration is stored by the command in the .jfrog directory at the root directory of the project.
Command-name
mvn-config
Abbreviation
mvnc
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Server ID for resolution. The server should be configured using the 'jf c add' command.
--server-id-deploy
[Optional] Server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-resolve-releases
[Optional] Resolution repository for release dependencies.
--repo-resolve-snapshots
[Optional] Resolution repository for snapshot dependencies.
--repo-deploy-releases
[Optional] Deployment repository for release artifacts.
--repo-deploy-snapshots
[Optional] Deployment repository for snapshot artifacts.
--include-patterns
[Optional] Filter deployed artifacts by setting a wildcard pattern that specifies which artifacts to include. You may provide multiple patterns, separated by a comma followed by a white-space. For example: artifact-*.jar, artifact-*.pom
--exclude-patterns
[Optional] Filter deployed artifacts by setting a wildcard pattern that specifies which artifacts to exclude. You may provide multiple patterns, separated by a comma followed by a white-space. For example: artifact-*-test.jar, artifact-*-test.pom
--disable-snapshots
[Default: false] Set to true to disable snapshot resolution.
--snapshots-update-policy
[Optional] Set snapshot update policy. Defaults to daily.
Command arguments:
The command accepts no arguments
Before using jf mvn-config, you must first configure your Artifactory server with JFrog CLI using the jf c add command. For instance:
jf c add my-artifactory-server --url=https://your-artifactory-url.jfrog.io --user=your-user --password=your-password
Replace my-artifactory-server with your desired server ID, and https://your-artifactory-url.jfrog.io, your-user, and your-password with your actual Artifactory instance details.
Once your Artifactory server is configured, you can set your Maven repositories within your project's root directory:
Example 1: Setting resolution and deployment repositories for the current project
This is the most common use case, where you define the repositories directly for the project you are currently working in.
jf mvn-config \
--server-id-resolve=my-artifactory-server \
--repo-resolve-releases=maven-virtual-releases \
--repo-resolve-snapshots=maven-virtual-snapshots \
--server-id-deploy=my-artifactory-server \
--repo-deploy-releases=maven-releases-local \
--repo-deploy-snapshots=maven-snapshots-local
my-artifactory-server: This should be the server ID you configured using jf c add.
maven-virtual-releases: Replace with the actual name of your Artifactory repository (e.g., libs-release, a virtual repository aggregating release repos) for resolving release dependencies.
maven-virtual-snapshots: Replace with the actual name of your Artifactory repository (e.g., libs-snapshot, a virtual repository aggregating snapshot repos) for resolving snapshot dependencies.
maven-releases-local: Replace with the actual name of your local Artifactory repository for deploying release artifacts.
maven-snapshots-local: Replace with the actual name of your local Artifactory repository for deploying snapshot artifacts.
The jf mvn command triggers the Maven client, while resolving dependencies and deploying artifacts from and to Artifactory.
Note: Before running the jf mvn command on a project for the first time, the project should be configured with the jf mvn-config command.
Note: If the machine running JFrog CLI has no access to the internet, make sure to read the Downloading the Maven and Gradle Extractor JARs section.
The following table lists the command arguments and flags:
Command-name
mvn
Abbreviation
mvn
Command options:
--threads
[Default: 3] Number of threads for uploading build artifacts.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--insecure-tls
[Default: false] Set to true to skip TLS certificates verification.
--scan
[Default: false] Set if you'd like all files to be scanned by Xray on the local file system prior to the upload, and skip the upload if any of the files are found vulnerable.
--format
[Default: table] Should be used with the --scan option. Defines the scan output format. Accepts table or json as values.
Command arguments:
The command accepts the same arguments and options as the mvn client.
The deployment to Artifactory is triggered by both the deploy and install phases. To disable deployment of artifacts, add -Dartifactory.publish.artifacts=false to the list of goals and options. For example: "jf mvn clean install -Dartifactory.publish.artifacts=false"
Run clean and install with Maven.
jf mvn clean install -f /path/to/pom.xml
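To also collect build-info while building, the --build-name and --build-number options described above can be added (the build name and number below are placeholders):
jf mvn clean install -f /path/to/pom.xml --build-name=my-maven-build --build-number=1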
JFrog CLI includes integration with Gradle, allowing you to resolve dependencies and deploy build artifacts from and to Artifactory, while collecting build-info and storing it in Artifactory.
Before using the gradle command, the project needs to be pre-configured with the Artifactory server and repositories, to be used for building and publishing the project. The gradle-config command should be used once to add the configuration to the project. The command should run while inside the root directory of the project. The configuration is stored by the command in the .jfrog directory at the root directory of the project.
Command-name
gradle-config
Abbreviation
gradlec
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Server ID for resolution. The server should be configured using the 'jf c add' command.
--server-id-deploy
[Optional] Server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
--repo-deploy
[Optional] Repository for artifacts deployment.
--uses-plugin
[Default: false] Set to true if the Gradle Artifactory Plugin is already applied in the build script.
--use-wrapper
[Default: false] Set to true if you'd like to use the Gradle wrapper.
--deploy-maven-desc
[Default: true] Set to false if you do not wish to deploy Maven descriptors.
--deploy-ivy-desc
[Default: true] Set to false if you do not wish to deploy Ivy descriptors.
--ivy-desc-pattern
[Default: '[organization]/[module]/ivy-[revision].xml'] Set the deployed Ivy descriptor pattern.
--ivy-artifacts-pattern
[Default: '[organization]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]'] Set the deployed Ivy artifacts pattern.
Command arguments:
The command accepts no arguments
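As an illustrative example (the server ID and repository names are placeholders), a typical project-level configuration could look like this:
jf gradle-config \
--server-id-resolve=my-artifactory-server \
--repo-resolve=gradle-virtual \
--server-id-deploy=my-artifactory-server \
--repo-deploy=gradle-releases-local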
The jf gradle command triggers the Gradle client, while resolving dependencies and deploying artifacts from and to Artifactory.
Note: Before running the jf gradle command on a project for the first time, the project should be configured with the jf gradle-config command.
Note: If the machine running JFrog CLI has no access to the internet, make sure to read the Downloading the Maven and Gradle Extractor JARs section.
The following table lists the command arguments and flags:
Command-name
gradle
Abbreviation
gradle
Command options:
--threads
[Default: 3] Number of threads for uploading build artifacts.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--scan
[Default: false] Set if you'd like all files to be scanned by Xray on the local file system prior to the upload, and skip the upload if any of the files are found vulnerable.
--format
[Default: table] Should be used with the --scan option. Defines the scan output format. Accepts table or json as values.
Command arguments:
The command accepts the same arguments and options as the gradle client.
Build the project using the artifactoryPublish task, while resolving and deploying artifacts from and to Artifactory.
jf gradle clean artifactoryPublish -b path/to/build.gradle
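To also collect build-info during the Gradle build, the --build-name and --build-number options described above can be added (the build name and number are placeholders):
jf gradle clean artifactoryPublish -b path/to/build.gradle --build-name=my-gradle-build --build-number=1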
For integrating with Maven and Gradle, JFrog CLI uses the build-info-extractor JAR files. These JAR files are downloaded by JFrog CLI from jcenter the first time they are needed.
If you're using JFrog CLI on a machine which has no access to the internet, you can configure JFrog CLI to download these jar files from an Artifactory instance. Here's how to configure Artifactory and JFrog CLI to download the jars files.
Create a remote Maven repository in Artifactory and name it extractors. When creating the repository, configure it to proxy https://releases.jfrog.io/artifactory/oss-release-local
Set the JFROG_CLI_EXTRACTORS_REMOTE environment variable with the server ID of the Artifactory server you configured, followed by a slash, and then the name of the repository you created. For example my-rt-server/extractors
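For example, assuming the server ID my-rt-server and the remote repository named extractors from the steps above, the environment variable could be set as follows:
export JFROG_CLI_EXTRACTORS_REMOTE=my-rt-server/extractors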
JFrog CLI includes integration with MSBuild and Artifactory, allowing you to resolve dependencies and deploy build artifacts from and to Artifactory, while collecting build-info and storing it in Artifactory. This is done by having JFrog CLI in your search path and adding JFrog CLI commands to the MSBuild csproj file.
For detailed instructions, please refer to our MSBuild Project Example on GitHub.
JFrog CLI provides full support for pulling and publishing docker images from and to Artifactory using the docker client running on the same machine. This allows you to collect build-info for your docker build and then publish it to Artifactory. You can also promote the pushed docker images from one repository to another in Artifactory.
To build and push your docker images to Artifactory, follow these steps:
Make sure Artifactory can be used as a Docker registry. Please refer to Getting Started with Docker and Artifactory in the JFrog Artifactory User Guide.
Make sure that the installed docker client has version 17.07.0-ce (2017-08-29) or above. To verify this, run docker -v.
To ensure that the docker client and your Artifactory docker registry are correctly configured to work together, run the following code snippet.
docker pull hello-world
docker tag hello-world:latest <artifactoryDockerRegistry>/hello-world:latest
docker login <artifactoryDockerRegistry>
docker push <artifactoryDockerRegistry>/hello-world:latest
If everything is configured correctly, the push should succeed, and the hello-world image (or any other image you push) should be uploaded to Artifactory.
Note: When running the docker-pull and docker-push commands, the CLI will first attempt to log in to the docker registry. In case of a login failure, the command will not be executed.
Check out our docker project examples on GitHub.
Running the jf docker pull command allows pulling docker images from Artifactory, while collecting the build-info and storing it locally, so that it can be later published to Artifactory, using the build-publish command.
The following table lists the command arguments and flags:
Command-name
docker pull
Abbreviation
dpl
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--skip-login
[Default: false] Set to true if you'd like the command to skip performing docker login.
Command arguments:
The same arguments and options supported by the docker client.
The subsequent command utilizes the docker client to pull the 'my-docker-registry.io/my-docker-image:latest' image from Artifactory. This operation logs the image layers as dependencies of the local build-info identified by the build name 'my-build-name' and build number '7'. This local build-info can subsequently be released to Artifactory using the command 'jf rt bp my-build-name 7'.
jf docker pull my-docker-registry.io/my-docker-image:latest --build-name=my-build-name --build-number=7
You can then publish the build-info collected by the jf docker pull command to Artifactory using the build-publish command.
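For example, publishing the build-info collected by the command above:
jf rt bp my-build-name 7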
After building your image using the docker client, the jf docker push command pushes the image layers to Artifactory, while collecting the build-info and storing it locally, so that it can be later published to Artifactory, using the jf rt build-publish command.
The following table lists the command arguments and flags:
Command-name
docker push
Abbreviation
dp
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--skip-login
[Default: false] Set to true if you'd like the command to skip performing docker login.
--threads
[Default: 3] Number of working threads.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
Command arguments:
The same arguments and options supported by the docker client.
The subsequent command utilizes the docker client to push the 'my-docker-registry.io/my-docker-image:latest' image to Artifactory. This operation logs the image layers as artifacts of the local build-info identified by the build name 'my-build-name' and build number '7'. This local build-info can subsequently be released to Artifactory using the command 'jf rt bp my-build-name 7'.
jf docker push my-docker-registry.io/my-docker-image:latest --build-name=my-build-name --build-number=7
You can then publish the build-info collected by the docker-push command to Artifactory using the build-publish command.
Podman is a daemon-less container engine for developing, managing, and running OCI Containers. Running the podman-pull command allows pulling docker images from Artifactory using podman, while collecting the build-info and storing it locally, so that it can be later published to Artifactory, using the build-publish command.
The following table lists the command arguments and flags:
Command-name
rt podman-pull
Abbreviation
rt ppl
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--skip-login
[Default: false] Set to true if you'd like the command to skip performing docker login.
Command argument
Image tag
The docker image tag to pull.
Source repository
Source repository in Artifactory.
In this example, podman is employed to pull the local image 'my-docker-registry.io/my-docker-image:latest' from the docker-local Artifactory repository. During this process, it registers the image layers as dependencies within a build-info identified by the build name 'my-build-name' and build number '7'. This build-info is initially established locally and must be subsequently published to Artifactory using the command 'jf rt build-publish my-build-name 7'.
jf rt podman-pull my-docker-registry.io/my-docker-image:latest docker-local --build-name my-build-name --build-number 7
You can then publish the build-info collected by the podman-pull command to Artifactory using the build-publish command.
Podman is a daemon-less container engine for developing, managing, and running OCI Containers. After building your image, the podman-push command pushes the image layers to Artifactory, while collecting the build-info and storing it locally, so that it can be later published to Artifactory, using the build-publish command.
The following table lists the command arguments and flags:
Command-name
rt podman-push
Abbreviation
rt pp
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--skip-login
[Default: false] Set to true if you'd like the command to skip performing docker login.
--threads
[Default: 3] Number of working threads.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
Command argument
Image tag
The docker image tag to push.
Target repository
Target repository in Artifactory.
In this illustration, podman is employed to push the local image 'my-docker-registry.io/my-docker-image:latest' to the docker-local Artifactory repository. During this process, it registers the image layers as artifacts within a build-info identified by the build name 'my-build-name' and build number '7'. This build-info is initially established locally and must be subsequently published to Artifactory using the command 'jf rt build-publish my-build-name 7'.
jf rt podman-push my-docker-registry.io/my-docker-image:latest docker-local --build-name=my-build-name --build-number=7
You can then publish the build-info collected by the podman-push command to Artifactory using the build-publish command.
JFrog CLI allows pushing containers to Artifactory using Kaniko, while collecting build-info and storing it in Artifactory. For detailed instructions, please refer to our Kaniko project example on GitHub.
JFrog CLI allows pushing containers to Artifactory using buildx, while collecting build-info and storing it in Artifactory. For detailed instructions, please refer to our buildx project example on GitHub.
JFrog CLI allows pushing containers to Artifactory using the OpenShift CLI, while collecting build-info and storing it in Artifactory. For detailed instructions, please refer to our OpenShift build project example on GitHub.
The build-docker-create command allows adding a docker image, which is already published to Artifactory, into the build-info. This build-info can be later published to Artifactory, using the build-publish command.
Command-name
rt build-docker-create
Abbreviation
rt bdc
Command options:
--image-file
Path to a file which includes one line in the following format: IMAGE-TAG@sha256:MANIFEST-SHA256. For example, running cat image-file-details might output: superfrog-docker.jfrog.io/hello-frog@sha256:30f04e684493fb5ccc030969df6de0
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--build-name
[Optional] Build name. For more details, please refer to Build Integration.
--build-number
[Optional] Build number. For more details, please refer to Build Integration.
--project
[Optional] JFrog project key.
--module
[Optional] Optional module name for the build-info.
--threads
[Default: 3] Number of working threads.
Command argument
Target repository
The name of the repository to which the image was pushed.
Note: If your Docker image has multiple tags pointing to the same digest, you can provide them in a comma-separated format in the --image-file. All listed tags will be processed and added to the build-info individually.
In this example, a Docker image that has already been deployed to Artifactory is incorporated into a locally created, unpublished build-info identified by the build name 'myBuild' and build number '1'. This local build-info can subsequently be published to Artifactory using the command 'jf rt bp myBuild 1'.
jf rt bdc docker-local --image-file image-file-details --build-name myBuild --build-number 1
You can then publish the build-info collected by the build-docker-create command to Artifactory using the build-publish command.
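As an illustrative sketch, assuming the image was already pushed to the docker-local repository, the image file can be created and the command run as follows (replace the placeholder with the image's actual manifest SHA256 digest):
echo "superfrog-docker.jfrog.io/hello-frog@sha256:<manifest-sha256>" > image-file-details
jf rt bdc docker-local --image-file image-file-details --build-name myBuild --build-number 1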
Promotion is the action of moving or copying a group of artifacts from one repository to another, to support the artifacts' lifecycle. When it comes to docker images, there are two ways to promote a docker image which was pushed to Artifactory:
Create build-info for the docker image, and then promote the build using the jf rt build-promote command.
Use the jf rt docker-promote command as described below.
The following table lists the command arguments and flags:
Command-name
rt docker-promote
Abbreviation
rt dpr
Command options:
--server-id
[Optional] Server ID configured using the 'jf config' command. If not specified, the default configured Artifactory server is used.
--copy
[Default: false] If set to true, the Docker image is copied to the target repository; otherwise, it is moved.
--source-tag
[Optional] The tag name to promote.
--target-docker-image
[Optional] Docker target image name.
--target-tag
[Optional] The target tag to assign the image after promotion.
Command argument
source docker image
The docker image name to promote.
source repository
Source repository in Artifactory.
target repository
Target repository in Artifactory.
Promote the hello-world docker image from the docker-dev-local repository to the docker-staging-local repository.
jf rt docker-promote hello-world docker-dev-local docker-staging-local
Note: The jf rt docker-promote command currently requires the source and target repositories to be different. It does not support promoting a Docker image to the same repository while assigning it a different target image name. If you need to perform this type of promotion, consider using the Artifactory REST API directly.
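The following illustrative example copies (rather than moves) a specific tag and assigns a new tag in the target repository, using the --copy, --source-tag and --target-tag options described above (tag values are placeholders):
jf rt docker-promote hello-world docker-dev-local docker-staging-local --copy --source-tag=1.0.0 --target-tag=1.0.0-rc1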
JFrog CLI provides full support for building npm packages using the npm client. This allows you to resolve npm dependencies, and publish your npm packages from and to Artifactory, while collecting build-info and storing it in Artifactory.
Follow these guidelines when building npm packages:
You can download npm packages from any npm repository type - local, remote or virtual, but you can only publish to a local or virtual Artifactory repository, containing local repositories. To publish to a virtual repository, you first need to set a default local repository. For more details, please refer to Deploying to a Virtual Repository.
When building npm packages, it is important to understand how the jf npm publish command handles publishing scripts. The behavior differs based on whether the --run-native flag is used:
Default Behavior (Without the --run-native flag): JFrog CLI runs npm pack in the background and then uploads the resulting package itself, rather than using the npm client's native publish command. Therefore, if your npm package includes prepublish or postpublish scripts, you must rename them to prepack and postpack respectively to ensure they are executed.
Behavior with the --run-native flag: When this flag is used, the command utilizes the native npm client's own publish lifecycle. In this mode, standard npm script names such as prepublish, publish, and postpublish are handled directly by npm itself, and no renaming is necessary.
Prerequisites
npm client version 5.4.0 and above.
Artifactory version 5.5.2 and above.
Before using the jf npm install, jf npm ci, and jf npm publish commands, the project needs to be pre-configured with the Artifactory server and repositories for building and publishing. The configuration method depends on your workflow:
Standard JFrog CLI Configuration: The jf npm-config command should be used once to add the configuration to the project. This command should be run from the project's root directory and stores the configuration in the .jfrog directory.
Native Client Configuration (--run-native): When the --run-native flag is used, JFrog CLI bypasses the configuration in the .jfrog directory. Instead, it uses the user's own .npmrc file for all configurations, including authentication tokens and other settings.
Command-name
npm-config
Abbreviation
npmc
Description
Configures the Artifactory server and repository details for npm builds within a project.
Command options:
--server-id-resolve
[Optional] Artifactory server ID for dependency resolution (configured using `jf c add`).
--server-id-deploy
[Optional] Artifactory server ID for artifact deployment (configured using `jf c add`).
--repo-resolve
[Optional] Repository for resolving dependencies.
--repo-deploy
[Optional] Repository for deploying artifacts.
Command arguments
Accepts no arguments.
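For example, the project can be configured non-interactively by passing the options listed above (the server ID and repository names below are placeholders):
jf npmc --server-id-resolve my-server --server-id-deploy my-server --repo-resolve npm-virtual --repo-deploy npm-local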
The jf npm install and jf npm ci commands execute npm's install and ci commands respectively, to fetch the npm dependencies from the npm repositories.
Commands Params
The following table lists the command arguments and flags:
Command-name
npm (covers the install and ci subcommands, for example jf npm install and jf npm ci)
Abbreviation
No dedicated abbreviation; the underlying npm client shorthand (such as npm i) can be used.
Command options:
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
--module
[Optional] Module name for the build-info.
--threads
[Default: 3] Number of working threads for build-info collection.
--run-native
[Optional] [Default: false] Set to true to use the native npm client and the user's existing .npmrc configuration file. When this flag is active, JFrog CLI will not create its own temporary .npmrc file. All configurations, including authentication, must be handled by the user's .npmrc file.
Command arguments:
The command accepts the same arguments and options as the npm client.
Note
The "deployment view" and "details summary" features are not supported by the jf npm install and jf npm ci commands. This limitation applies regardless of whether the --run-native flag is used.
Example 1
The following example installs the dependencies and records them locally as part of build my-build-name/1. The build-info can later be published to Artifactory using the build-publish command. The dependencies are resolved from the Artifactory server and repository configured by the npm-config command.
jf npm install --build-name=my-build-name --build-number=1
Example 2
The following example installs the dependencies. The dependencies are resolved from the Artifactory server and repository configured by the npm-config command.
jf npm install
Example 3
The following example installs the dependencies using the npm ci command. The dependencies are resolved from the Artifactory server and repository configured by the npm-config command.
jf npm ci
Example 4
The following example installs dependencies using the native npm client, based on the .npmrc configuration.
jf npm install --run-native
For example, to also record build-info while using the native client: jf npm install --run-native --build-name=my-native-build --build-number=1
The npm-publish command packs and deploys the npm package to the designated npm repository.
Before running the npm-publish command on a project for the first time, the project should be configured using the jf npm-config command. This configuration includes the Artifactory server and repository to which the package should deploy.
When using the --run-native flag, the jf npm-config command and the resulting .jfrog directory configuration are bypassed. Instead, JFrog CLI uses the native npm client, which relies exclusively on the user's .npmrc file for all configurations. Therefore, you must ensure your .npmrc file is correctly configured for publishing to the desired Artifactory repository, including all necessary repository URLs and authentication details.
Warning:
If your npm package includes the prepublish or postpublish scripts and you are not using the --run-native flag, please refer to the guidelines above (rename to prepack and postpack).
When using --run-native, standard npm script names are respected by the npm client.
The following table lists the command arguments and flags:
Command-name
npm publish
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
--module
[Optional] Module name for the build-info.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
--scan
[Default: false] Set if you'd like all files to be scanned by Xray on the local file system prior to the upload, and skip the upload if any of the files are found vulnerable.
--format
[Default: table] Should be used with the --scan option. Defines the scan output format. Accepts table or JSON as values.
--run-native
[Optional] [Default: false] Set to true to use the native npm client for publishing. This allows leveraging all features and configurations specified in the user's .npmrc file.
Note:
Requires a valid .npmrc file with appropriate configurations and authentication tokens.
Performance: Using this flag may result in performance degradation compared to the default JFrog CLI publish mechanism (which uses multi-threading).
Unsupported features with this flag: "Deployment view" and "details summary" are not supported when this flag is used.
Command argument
The command accepts the same arguments and options that the npm pack command expects (when not using --run-native) or that the npm publish command expects (when using --run-native).
To pack and publish the npm package and also record it locally as part of build my-build-name/1, run the following command. The build-info can later be published to Artifactory using the build-publish command. The package is published to the Artifactory server and repository configured by the npm-config command.
jf npm publish --build-name=my-build-name --build-number=1
Publishing an npm package using the native npm client and user's .npmrc.
jf npm publish --run-native
(Ensure your package.json and .npmrc are configured for publishing)
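The publish command can also be combined with the scanning and summary options listed above. For example, the following sketch, which assumes Xray is available for the configured server, scans the package locally before uploading and prints the list of affected files:
jf npm publish --scan --detailed-summary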
JFrog CLI provides full support for building npm packages using the yarn client. This allows you to resolve npm dependencies, while collecting build-info and storing it in Artifactory. You can download npm packages from any npm repository type - local, remote or virtual. Publishing the packages to a local npm repository is supported through the jf rt upload command.
Note: "Yarn versions from 2.4.0 up to, but not including, Yarn 4.x are supported. Yarn 4.x is currently not supported by JFrog CLI."
Before using the jf yarn command, the project needs to be pre-configured with the Artifactory server and repositories, to be used for building the project. The yarn-config command should be used once to add the configuration to the project. The command should run while inside the root directory of the project. The configuration is stored by the command in the .jfrog directory at the root directory of the project.
Command-name
yarn-config
Abbreviation
yarnc
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
Command arguments:
The command accepts no arguments
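For a non-interactive setup, the resolution details can be passed directly using the options listed above (the server ID and repository name are placeholders):
jf yarnc --server-id-resolve my-server --repo-resolve npm-virtual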
The jf yarn command executes the yarn client, to fetch the npm dependencies from the npm repositories.
Note: Before running the command on a project for the first time, the project should be configured using the jf yarn-config command.
The following table lists the command arguments and flags:
Command-name
yarn
Command options:
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
--module
[Optional] Module name for the build-info.
--threads
[Default: 3] Number of working threads for build-info collection.
Command arguments:
The command accepts the same arguments and options as the yarn client.
Example 1
The following example installs the dependencies and records them locally as part of build my-build-name/1. The build-info can later be published to Artifactory using the build-publish command. The dependencies are resolved from the Artifactory server and repository configured by the yarn-config command.
jf yarn install --build-name=my-build-name --build-number=1
Example 2
The following example installs the dependencies. The dependencies are resolved from the Artifactory server and repository configured by jf yarn-config command.
jf yarn install
JFrog CLI provides full support for building Go packages using the Go client. This allows you to resolve Go dependencies from Artifactory and publish your Go packages to it, while collecting build-info and storing it in Artifactory.
JFrog CLI client version 1.20.0 and above.
Artifactory version 6.1.0 and above.
Go client version 1.11.0 and above.
To help you get started, you can use this sample project on GitHub.
Before you can use JFrog CLI to build your Go projects with Artifactory, you first need to set the resolution and deployment repositories for the project.
Here's how you set the repositories.
'cd' into the root of the Go project.
Run the jf go-config command.
Command-name
go-config
Abbreviation
Command options:
--global
[Default false] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--server-id-deploy
[Optional] Artifactory server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
--repo-deploy
[Optional] Repository for artifacts deployment.
Example 1
Set repositories for this go project.
jf go-config
Example 2
Set repositories for all go projects on this machine.
jf go-config --global
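Example 3
Set repositories non-interactively using the options listed above (the server IDs and repository names are placeholders).
jf go-config --server-id-resolve my-server --server-id-deploy my-server --repo-resolve go-virtual --repo-deploy go-local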
The go command triggers the go client.
Note: Before running the go command on a project for the first time, the project should be configured using the jf go-config command.
The following table lists the command arguments and flags:
Command-name
go
Abbreviation
go
Command options:
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
--no-fallback
[Default: false] Set to true to avoid downloading packages from the VCS if they are missing in Artifactory.
--module
[Optional] Module name for the build-info.
Command arguments:
Go command
The command accepts the same arguments and options as the go client.
Example 1
The following example runs the go build command. The dependencies are resolved from Artifactory via the go-virtual repository.
Note: Before using this example, please make sure to set repositories for the Go project using the go-config command.
jf go build
Example 2
The following example runs the go build command, while recording the build-info locally under build name my-build and build number 1. The build-info can later be published to Artifactory using the build-publish command.
Note: Before using this example, please make sure to set repositories for the Go project using the go-config command.
jf go build --build-name=my-build --build-number=1
The jf go-publish command packs and deploys the Go package to the designated Go repository in Artifactory.
Note: Before running the jf go-publish command on a project for the first time, the project should be configured using the jf go-config command.
The following table lists the command arguments and flags:
Command-name
go-publish
Abbreviation
gp
Command options:
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
--module
[Optional] Module name for the build-info.
--detailed-summary
[Default: false] Set to true to include a list of the affected files as part of the command output summary.
Command argument
Version
The version of the Go project that is being published
Example 1
To pack and publish the Go package, run the following command. Before running this command on a project for the first time, the project should be configured using the jf go-config command.
jf gp v1.2.3
Example 2
To pack and publish the Go package and also record the build-info as part of build my-build-name/1, run the following command. The build-info can later be published to Artifactory using the build-publish command. Before running this command on a project for the first time, the project should be configured using the jf go-config command.
jf gp v1.2.3 --build-name my-build-name --build-number 1
JFrog CLI provides full support for building Python packages using the pip and pipenv package managers, and for deploying distributions using twine. This allows resolving Python dependencies from Artifactory using pip and pipenv, while recording the downloaded packages. After installing and packaging the project, the distributions and wheels can be deployed to Artifactory using twine, while recording the uploaded packages. The downloaded packages are stored as dependencies in the build-info stored in Artifactory, while the uploaded ones are stored as artifacts.
To help you get started, you can use the sample projects on GitHub.
Before you can use JFrog CLI to build your Python projects with Artifactory, you first need to set the repository for the project.
Here's how you set the repositories.
'cd' into the root of the Python project.
Run the jf pip-config or jf pipenv-config commands, depending on whether you're using the pip or pipenv clients.
Commands Params
Command-name
pip-config / pipenv-config
Abbreviation
pipc / pipec
Command options:
--global
[Default false] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
--server-id-deploy
[Optional] Artifactory server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-deploy
[Optional] Repository for artifacts deployment.
Examples
Example 1
Set repositories for this Python project when using the pip client (for pipenv: jf pipec).
jf pipc
Example 2
Set repositories for all Python projects using the pip client on this machine (for pipenv: jf pipec --global).
jf pipc --global
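Example 3
Set repositories non-interactively by passing the options listed above (the server ID and repository names are placeholders).
jf pipc --server-id-resolve my-server --repo-resolve pypi-remote --server-id-deploy my-server --repo-deploy pypi-local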
The jf pip install and jf pipenv install commands use the pip and pipenv clients respectively, to install the project dependencies from Artifactory. The jf pip install and jf pipenv install commands can also record these packages as build dependencies as part of the build-info published to Artifactory.
Note: Before running the pip install and pipenv install commands on a project for the first time, the project should be configured using the jf pip-config or jf pipenv-config commands respectively.
Recording all dependencies
JFrog CLI records the installed packages as build-info dependencies. The recorded dependencies are the packages installed during the execution of the jf pip install or jf pipenv install command. When running the command inside a Python environment that already has some of the packages installed, those pre-installed packages are not downloaded again and are therefore not included in the build-info, because they were not installed by JFrog CLI. A warning message is added to the log in this case.
How to include all packages in the build-info?
The details of all the installed packages are always cached by the jf pip install and jf pipenv install commands in the .jfrog/projects/deps.cache.json file, located under the root of the project. JFrog CLI uses this cache to include previously installed packages in the build-info.
If the Python environment had some packages installed prior to the first execution of the install command, those previously installed packages will be missing from the cache and therefore will not be included in the build-info.
Running the install command with both the no-cache-dir and force-reinstall pip options should re-download and install these packages, so they will be included in the build-info and added to the cache. It is also recommended to run the command from inside a virtual environment.
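For example, a minimal sketch that forces pip to re-download all packages while recording them as part of a build (the build name and number are placeholders; no-cache-dir and force-reinstall are standard pip options passed through by JFrog CLI):
jf pip install . --no-cache-dir --force-reinstall --build-name my-build --build-number 1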
Commands Params
Command-name
pip / pipenv
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
--module
[Optional] Module name for the build-info.
Command argument
The command accepts the same arguments and options as the pip / pipenv clients.
Examples
Example 1
The following command triggers pip install, while recording the build dependencies as part of build name my-build and build number 1.
jf pip install . --build-name my-build --build-number 1
Example 2
The following command triggers pipenv install, while recording the build dependencies as part of build name my-build and build number 1.
jf pipenv install . --build-name my-build --build-number 1
The jf twine upload command uses twine to publish the project distributions to Artifactory. The jf twine upload command can also record these packages as build artifacts as part of the build-info published to Artifactory.
Note: Before running the twine upload command on a project for the first time, the project should be configured using the jf pip-config or jf pipenv-config commands, with deployer configuration.
Commands Params
Command-name
twine
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
--module
[Optional] Module name for the build-info.
Command argument
The command accepts the arguments and options supported by the twine client, except for repository configuration and authentication options.
Examples
Example 1
The following command triggers twine upload, while recording the build artifacts as part of build name my-build and build number 1.
jf twine upload "dist/*" --build-name my-build --build-number 1
JFrog CLI provides partial support for building Python packages using the poetry package manager. This allows resolving Python dependencies from Artifactory, but it currently does NOT record the downloaded packages as dependencies in the build-info.
Before you can use JFrog CLI to build your Python projects with Artifactory, you first need to set the repository for the project.
Here's how you set the repositories.
'cd' into the root of the Python project.
Run the jf poetry-config command as follows.
Commands Params
Command-name
poetry-config
Abbreviation
poc
Command options:
--global
[Default false] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
Examples
Example 1
Set repositories for this Python project when using the poetry client.
jf poc
Example 2
Set repositories for all Python projects using the poetry client on this machine.
jf poc --global
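Example 3
Set the resolution server and repository non-interactively using the options listed above (the server ID and repository name are placeholders).
jf poc --server-id-resolve my-server --repo-resolve pypi-remote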
The jf poetry install command uses the poetry client to install the project dependencies from Artifactory.
Note: Before running the poetry install command on a project for the first time, the project should be configured using the jf poetry-config command.
Commands Params
Command-name
poetry
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
--module
[Optional] Module name for the build-info.
Command argument
The command accepts the same arguments and options as the poetry client.
Examples
Example 1
The following command triggers poetry install, while resolving dependencies from Artifactory.
jf poetry install
JFrog CLI provides full support for restoring NuGet packages using the NuGet client or the .NET Core CLI. This allows you to resolve NuGet dependencies from and publish your NuGet packages to Artifactory, while collecting build-info and storing it in Artifactory.
NuGet dependency resolution is supported by the jf nuget command, which uses the NuGet client, and by the jf dotnet command, which uses the .NET Core CLI.
To publish your NuGet packages to Artifactory, use the jf rt upload command.
Before using the nuget or dotnet commands, the project needs to be pre-configured with the Artifactory server and repository to be used for building the project. To do so, use the nuget-config or dotnet-config commands respectively. These commands should be executed while inside the root directory of the project, and they store the configuration in the .jfrog directory at the root directory of the project. You then have the option of storing the .jfrog directory with the project sources, or creating this configuration after the sources are checked out.
The following table lists the commands' options:
Command-name
nuget-config / dotnet-config
Abbreviation
nugetc / dotnetc
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-resolve
[Optional] Artifactory server ID for resolution. The server should be configured using the 'jf c add' command.
--repo-resolve
[Optional] Repository for dependencies resolution.
--nuget-v2
[Default: false] Set to true if you'd like to use the NuGet V2 protocol when restoring packages from Artifactory (instead of NuGet V3).
Command arguments:
The command accepts no arguments
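For example, resolution can be configured non-interactively by passing the options listed above (the server ID and repository name are placeholders; use jf dotnetc with the same options when working with the .NET Core CLI):
jf nugetc --server-id-resolve my-server --repo-resolve nuget-virtual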
The nuget command runs the NuGet client, and the dotnet command runs the .NET Core CLI.
Before running the nuget command on a project for the first time, the project should be configured using the nuget-config command.
Before running the dotnet command on a project for the first time, the project should be configured using the dotnet-config command.
The following table lists the commands' arguments and options:
Command-name
nuget / dotnet
Abbreviation
Command options:
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
--module
[Optional] Module name for the build-info.
Command argument
The command accepts the same arguments and options as the NuGet client / .NET Core CLI.
Example 1
Run nuget restore for the solution at the current directory, while resolving the NuGet dependencies from the pre-configured Artifactory repository. Use the NuGet client for this command
jf nuget restore
Example 2
Run dotnet restore for the solution at the current directory, while resolving the NuGet dependencies from the pre-configured Artifactory repository. Use the .NET Core CLI for this command
jf dotnet restore
Example 3
Run dotnet restore for the solution at the current directory, while resolving the NuGet dependencies from the pre-configured Artifactory repository. In addition, record the build-info as part of build my-build-name/1. The build-info can later be published to Artifactory using the build-publish command.
jf dotnet restore --build-name=my-build-name --build-number=1
JFrog CLI supports packaging Terraform modules and publishing them to a Terraform repository in Artifactory using the jf terraform publish command.
We recommend using this example project on GitHub for an easy start up.
Before using the jf terraform publish command for the first time, you first need to configure the Terraform repository for your Terraform project. To do this, follow these steps:
'cd' into the root directory for your Terraform project.
Run the interactive jf terraform-config command and set the deployment repository name.
The jf terraform-config command stores the repository name inside the .jfrog directory located in the current directory. You can also add the --global command option if you'd like the repository configuration to apply to all projects on the machine. In that case, the configuration is saved in JFrog CLI's home directory.
The following table lists the command options:
Command-name
terraform-config
Abbreviation
tfc
Command options:
--global
[Optional] Set to true, if you'd like the configuration to be global (for all projects on the machine). Specific projects can override the global configuration.
--server-id-deploy
[Optional] Artifactory server ID for deployment. The server should be configured using the 'jf c add' command.
--repo-deploy
[Optional] Repository for artifacts deployment.
Command arguments:
The command accepts no arguments
Example 1
Configuring the Terraform repository for a project, while inside the root directory of the project
jf tfc
Example 2
Configuring the Terraform repository for all projects on the machine
jf tfc --global
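Example 3
Configure the deployment server and repository non-interactively using the options listed above (the server ID and repository name are placeholders).
jf tfc --server-id-deploy my-server --repo-deploy terraform-local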
The terraform publish command creates a terraform package for the module in the current directory, and publishes it to the configured Terraform repository in Artifactory.
The following table lists the command arguments and options:
Command-name
terraform publish
Abbreviation
tf p
Command options:
--namespace
[Mandatory] Terraform module namespace
--provider
[Mandatory] Terraform module provider
--tag
[Mandatory] Terraform module tag
--exclusions
[Optional] A list of semicolon-separated (;) exclude pattern wildcards. Paths inside the module that match one of the patterns are excluded from the deployed package.
--build-name
[Optional] Build name. For more details, please refer to .
--build-number
[Optional] Build number. For more details, please refer to .
--project
[Optional] JFrog project key.
Command argument
The command accepts no arguments
Example 1
The command creates a package for the Terraform module in the current directory, and publishes it to the Terraform repository (configured by the jf tfc command) with the provided namespace, provider, and tag.
jf tf p --namespace example --provider aws --tag v0.0.1
Example 2
The command creates a package for the Terraform module in the current directory, and publishes it to the Terraform repository (configured by the jf tfc command) with the provided namespace, provider, and tag. The published package will not include module paths that contain either test or ignore.
jf tf p --namespace example --provider aws --tag v0.0.1 --exclusions "*test*;*ignore*"
Example 3
The command creates a package for the Terraform module in the current directory, and publishes it to the Terraform repository (configured by the jf tfc command) with the provided namespace, provider, and tag. The published module will be recorded as an artifact of a build named my-build with build number 1. The jf rt bp command publishes the build to Artifactory.
jf tf p --namespace example --provider aws --tag v0.0.1 --build-name my-build --build-number 1
jf rt bp my-build 1