Resolve Vulnerabilities Sooner With Contextual Data
https://www.docker.com/blog/resolve-vulnerabilities-sooner-with-contextual-data/ (October 25, 2022)

OpenSSL 3.0.7 and “Text4Shell” might be the most recent critical vulnerabilities to plague your development team, but they won’t be the last. In 2021, critical vulnerabilities reached a record high, and attackers are reusing their work: over 50% of zero-day attacks this year were variants of previously patched vulnerabilities.

With each new security vulnerability, we’re forced to re-examine our current systems and processes. If you’re impacted by the OpenSSL issue or by Text4Shell (aka CVE-2022-42889), you’ve probably asked yourself, “Are we using Apache Commons Text (and where)?” or “Is it a vulnerable version?”, along with similar questions. And if you’re packaging applications into container images and running those on cloud infrastructure, then a breakdown by image, deployment environment, and impacted Commons Text version would be extremely useful.
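For a single image whose filesystem you have already exported (for example with docker export), a quick manual check can answer the first two questions. The Python sketch below is only a rough illustration under those assumptions, not Docker tooling: the script name and directory are hypothetical, and the version range comes from the public advisory for CVE-2022-42889 (Commons Text 1.5 through 1.9, fixed in 1.10.0). It also shows why doing this by hand across many images and environments doesn’t scale.

```python
# Rough manual check for CVE-2022-42889 in a single exported image filesystem.
# Hypothetical usage: python find_text4shell.py /tmp/exported-image-rootfs
import os
import re
import sys

# Per the public advisory, Commons Text 1.5 through 1.9 are affected; 1.10.0 is fixed.
AFFECTED_JAR = re.compile(r"commons-text-1\.[5-9]\.jar$")

def scan(root):
    """Walk the filesystem and collect paths of affected Commons Text jars."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if AFFECTED_JAR.search(name):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in scan(root):
        print(f"Vulnerable Apache Commons Text jar: {path}")
```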

Developers need contextual data to help cut through the noise and answer these questions, but gathering information takes time and significantly impacts productivity. An entire day is derailed if developers must context switch and spend countless hours researching, triaging, and fixing these issues. So, how do we stop these disruptions and surface crucial data in a more accessible way for developers?

Start with continuously examining images

Bugs, misconfigurations, and vulnerabilities don’t stop once an image is pushed to production, and neither should development. Improving images is a continuous effort that requires a constant flow of information before, during, and after development.

Before images are used, teams spend a significant amount of time vetting and selecting them. That same amount of effort needs to be put into continuously inspecting those same images. Otherwise, you’ll find yourself in a reactive cycle of unnecessary rework, wasted time, and overall developer frustration.

That’s where contextual data comes in. Contextual data ties directly to the situation around it to give developers a broader understanding. As an example, contextual data for vulnerabilities gives you clear and precise insights to understand what the vulnerability is, how urgent it is, and its specific impact on the developer and the application architecture — whether local, staging, or production.

Contextual data reduces noise and helps the developer know the what and the where so they can prioritize making the correct changes in the most efficient way. What does contextual data look like? It can be…

  • A comparison of detected vulnerabilities between an image built from a PR branch and the image version currently running in production
  • A comparison between images that use the same custom base image
  • An alert sent into a Slack channel that’s connected to a GitHub repository when a new critical or high CVE is detected in an image currently running in production
  • An alert or pull request to update to a newer version of your base image to remediate a certain CVE

Contextual data makes it faster for developers to locate and remediate the vulnerabilities in their application.

Use Docker to surface contextual data

Contextual data is about providing more information that’s relevant to developers in their daily tasks. How does it work?

Docker can index and analyze public and private images within your registries to provide insights about the quality of your images. For example, you can get open source package updates, receive alerts about new vulnerabilities as security researchers discover them, send updates to refresh outdated base images, and be informed about accidentally embedded secrets like access tokens. 

The screenshot below shows what appears to be a very common list of vulnerabilities for a selected Docker image. But there’s a lot more data on this page that relates to the image:

  • The page breaks the vulnerabilities up by layer and base image, making it easy to assess where to apply a fix for a detected vulnerability.
  • Image refs in the right column highlight that this version of the image is currently running in production.
  • We also see that this image represents the current head commit in the corresponding Git repository, and which Dockerfile it was built from.
  • The current base image and potential alternatives are listed for comparison.
An image report with a list of common CVEs, including Text4Shell

With Slack, notifications are sent to the channels your team already uses. The screenshot below shows an alert sent into a Slack channel that’s configured to show activity for a selected set of Git repositories. Besides activity like commits, CI builds, and deployments, you can see the Text4Shell alert providing concise, actionable information to developers collaborating in this channel:

Slack update on the critical Text4Shell vulnerability

You can also get suggestions to remediate certain categories of vulnerabilities and raise pull requests to update vulnerable packages, as in the following screenshot:

Remediating the Text4Shell CVE via a PR and comparing to the main branch

You can find more of this type of information for public images, like Docker Official Images or Docker Verified Publisher images, using our Image Vulnerability Database.

Vulnerability remediation is just the beginning

Contextual data is essential for faster resolution of vulnerabilities, but it’s more than that. With the right data at the right time, developers are able to work faster and spend their time innovating instead of drowning in security tickets.

Imagine you could assess your production images today to find out where you’re potentially going to be vulnerable. Your teams could have days or weeks to prepare to remediate the next critical vulnerability, like OpenSSL’s forthcoming disclosure of a new critical CVE next Tuesday, November 1, 2022.

Searching for Debian OpenSSL on dso.docker.com

Interested in getting these types of insights and learning more about providing contextual data for happier, more productive devs? Sign up for our Early Access Program to harness these tools and provide invaluable feedback to help us improve our product!

Advanced Image Management in Docker Hub
https://www.docker.com/blog/advanced-image-management-in-docker-hub/ (March 23, 2021)

We are excited to announce the latest feature for Docker Pro and Team users: our new Advanced Image Management Dashboard, available on Docker Hub. The new dashboard gives developers a new level of access to all of the content stored in Docker Hub, with more fine-grained control over removing old content and exploring old versions of pushed images.


Historically, Docker Hub has given you visibility into the latest version of a tag you’ve pushed, but it has been very hard to see, or even understand, what happened to all of the older content you pushed. When you push an image to Docker Hub, you are pushing a manifest (a list of all of the layers in your image) and the layers themselves.

When you update an existing tag, only the new layers are pushed, along with a new manifest that references them. This new manifest is given the tag you specify when you push, such as bengotch/simplewhale:latest. But that does not mean the old manifests pointing at the previous layers of your image are removed from Hub. They are still there; there has simply been no easy way to see them or to manage that content. You can, in fact, still use and reference them by manifest digest if you know it. You can think of this like the commit history (the old digests) of a particular branch (your tag) of your repo (your image repo!).
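To make the commit-history analogy concrete, here’s a minimal Python sketch (using the requests library) of how an old manifest stays reachable by its digest. It assumes anonymous pull access through Docker Hub’s token service and the standard Registry v2 manifest endpoints, and uses a public repository purely as an example.

```python
# Resolve a tag to its manifest digest, then fetch the manifest by digest.
# The digest keeps working even after the tag moves on to a newer manifest,
# much like an old commit hash on a branch.
import requests

REPO = "library/alpine"  # example public repository
ACCEPT = ("application/vnd.docker.distribution.manifest.v2+json, "
          "application/vnd.docker.distribution.manifest.list.v2+json, "
          "application/vnd.oci.image.index.v1+json")

# Anonymous pull token for the repository.
token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io", "scope": f"repository:{REPO}:pull"},
).json()["token"]
headers = {"Authorization": f"Bearer {token}", "Accept": ACCEPT}

# The tag currently points at some manifest; its digest comes back in a header.
by_tag = requests.get(f"https://registry-1.docker.io/v2/{REPO}/manifests/latest",
                      headers=headers)
digest = by_tag.headers["Docker-Content-Digest"]
print("latest ->", digest)

# The same manifest, addressed by digest rather than by tag.
by_digest = requests.get(f"https://registry-1.docker.io/v2/{REPO}/manifests/{digest}",
                         headers=headers)
print("fetch by digest:", by_digest.status_code)
```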


This means you can have hundreds of old versions of images that your systems may still be pulling by hash rather than by tag, and you may be unaware which old versions are still in use. On top of this, until now the only way to remove these old versions was to delete the entire repo and start again!

With the release of the image management dashboard, we provide a new GUI that surfaces all of this information, including whether those currently untagged old manifests are still “active” (pulled in the last month) or inactive. Combined with the new bulk delete for these objects and for current tags, this gives you a more powerful tool for batch-managing your content in Docker Hub.

To get started, you will find a new banner on your repos page if you have inactive images:

This banner tells you how many images you have, tagged or untagged, that have not been pushed or pulled in the last month. By clicking View, you can go through to the new Advanced Image Management Dashboard to check out all of your content. From there you can see what the tags of certain manifests used to be and use the multi-selector option to bulk delete them.

For a full product tour, check out our overview video of the feature below.

We hope that you are excited about this first step in providing greater insight into your content on Docker Hub. If you want to start exploring your content, all users can see how many inactive images they have, while Pro and Team users can see which tags these were associated with and what their hashes are, and can start removing them today. To find out more about becoming a Pro or Team user, check out this page.

Scaling Docker to Serve Millions More Developers: Network Egress
https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/ (August 24, 2020)

In Part 1 of this blog series, we took a deep dive into all of the images stored in Docker Hub, the world’s largest container registry. We did this to give you a better understanding of how our new Terms of Service updates will impact development teams who use Docker Hub to manage their container images and CI/CD pipelines.

Part 2 of this blog series takes a deep dive into rate limits for container image pulls, which were also announced as part of our updated Docker Terms of Service (ToS) communications. We detailed the following pull rate limits for Docker subscription plans, which will take effect November 1, 2020:

  • Free plan – anonymous users: 100 pulls per 6 hours 
  • Free plan – authenticated users: 200 pulls per 6 hours
  • Pro plan – 50,000 pulls in a 24 hour period
  • Team plan – 50,000 pulls in a 24 hour period

Docker defines pull rate limits as the number of manifest requests to Docker Hub. Rate limits for Docker image pulls are based on the account type of the user requesting the image – not the account type of the image’s owner. For anonymous (unauthenticated) users, pull rates are limited based on the individual IP address. 

We’ve been getting questions from customers and the community regarding container image layers. We are not counting image layers as part of the pull rate limits. Because we are limiting on manifest requests, the number of layers (blob requests) related to a pull is unlimited at this time. This change, based on community feedback, makes the limits more user-friendly: users do not need to count the layers of each image they may be using.
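One way to see where you stand against these limits is to read the rate-limit headers returned with a manifest request. The sketch below assumes the anonymous token endpoint, the ratelimit-limit and ratelimit-remaining response headers Docker Hub attaches to manifest responses, and the ratelimitpreview/test repository Docker makes available for checking limits without pulling a real image.

```python
# Check the pull rate limit Docker Hub currently applies to you (or your IP).
import requests

# Anonymous token scoped to the rate-limit preview repository.
token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io",
            "scope": "repository:ratelimitpreview/test:pull"},
).json()["token"]

# A HEAD request returns the headers without counting as a pull.
resp = requests.head(
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest",
    headers={"Authorization": f"Bearer {token}"},
)
print("limit:    ", resp.headers.get("ratelimit-limit"))      # e.g. "100;w=21600"
print("remaining:", resp.headers.get("ratelimit-remaining"))  # e.g. "76;w=21600"
```

If you authenticate the token request with your Docker ID instead, the headers reflect your account’s limit rather than your IP address’s, matching how the limits described above are applied.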

A deeper look at Docker Hub pull rates

In determining why rate limits were necessary and how to apply them, we spent considerable time analyzing image downloads from Docker Hub. What we found confirmed that the vast majority of Docker users pulled images at a rate you would expect for normal workflows. However, there is an outsized impact from a small number of anonymous users. For example, roughly 30% of all downloads on Hub come from only 1% of our anonymous users.


The new pull limits are based on this analysis, such that most of our users will not be impacted. These limits are designed to accommodate normal use cases for developers – learning Docker, developing code, building images, and so forth.

Helping developers understand pull rate limits 

Now that we understood the impact and where the limits should land, we needed to define at a technical level how these limits should work. Limiting image pulls from a Docker registry is complicated. You won’t find a pull API in the registry specification – it doesn’t exist. In fact, an image pull is actually a combination of manifest and blob API requests, and these are done in different patterns depending on the client state and the image in question.

For example, if you already have the image, the Docker Engine client will issue a manifest request, realize it has all of the referenced layers based on the returned manifest, and stop. On the other hand, if you are pulling an image that supports multiple architectures, a manifest request will be issued and return a list of image manifests for each supported architecture. The Docker Engine will then issue another specific manifest request for the architecture it’s running on, and receive a list of all the layers in that image. Finally, it will request each layer (blob) it is missing.

So an image pull is actually one or two manifest requests, and zero to infinite blob (layer) requests. Historically, Docker monitored rate limits based on blobs (layers). This was because a blob most closely correlated with bandwidth usage. However, we listened to feedback from the community that this is difficult to track, leads to an inconsistent experience depending on how many layers the image you are pulling has, discourages good Dockerfile practices, and is not intuitive for users who just want to get stuff done without being experts on Docker images and registries.

As such, we are rate limiting based on manifest requests moving forward. This has the advantage of being more directly coupled with a pull, so it is easy for users to understand. There is a small tradeoff – if you pull an image you already have, this is still counted even if you don’t download the layers. Overall, we hope this method of rate limiting is both fair and user-friendly.
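To see this request pattern for yourself, here’s a minimal Python sketch that walks the manifest side of a multi-architecture pull against the public Registry v2 API, counting the layer requests a real pull would make instead of downloading them. It assumes anonymous pull access and uses library/alpine purely as an example.

```python
# Reproduce the manifest requests of a multi-arch pull and count the layers.
import requests

REPO = "library/alpine"  # example public, multi-architecture image
REGISTRY = "https://registry-1.docker.io"
INDEX_TYPES = ("application/vnd.docker.distribution.manifest.list.v2+json, "
               "application/vnd.oci.image.index.v1+json")
MANIFEST_TYPES = ("application/vnd.docker.distribution.manifest.v2+json, "
                  "application/vnd.oci.image.manifest.v1+json")

# Anonymous pull token for the repository.
token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io", "scope": f"repository:{REPO}:pull"},
).json()["token"]
auth = {"Authorization": f"Bearer {token}"}

# Manifest request #1: the tag resolves to a list of per-architecture manifests.
index = requests.get(f"{REGISTRY}/v2/{REPO}/manifests/latest",
                     headers={**auth, "Accept": INDEX_TYPES}).json()
arch_digest = next(m["digest"] for m in index["manifests"]
                   if m["platform"]["architecture"] == "amd64"
                   and m["platform"]["os"] == "linux")

# Manifest request #2: the architecture-specific manifest lists the layers.
manifest = requests.get(f"{REGISTRY}/v2/{REPO}/manifests/{arch_digest}",
                        headers={**auth, "Accept": MANIFEST_TYPES}).json()

# Only the two manifest requests above count toward the pull limit; the blob
# (layer) requests that a real pull would issue next do not.
print("manifest requests made:", 2)
print("layer (blob) requests a full pull would issue:", len(manifest["layers"]))
```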

We welcome your feedback

We will be monitoring and adjusting these limits over time based on common use cases to make sure the limits are appropriate for each tier of user, and in particular, that we are never blocking developers from getting work done.

Stay tuned in the coming weeks for a blog post about configuring CI and production systems in light of these changes.

Finally, as part of Docker’s commitment to the open source community, before November 1 we will be announcing availability of new open source plans. To apply for an open source plan, please complete the short form here.

For more information regarding the recent terms of service changes, please refer to the FAQ.

For users that need higher image pull limits, Docker also offers 50,000 pulls in a 24 hour period as a feature of the Pro and Team plans. Visit www.docker.com/pricing to view the available plans.
As always, we welcome your questions and feedback at pricingquestions@docker.com.
