

I am the founder of a very similar project that supports both AWS Secrets Manager and Google Secret Manager, and which actually predates both this and GoDaddy's solution[1].

The proliferation of these types of projects clearly shows the need for secret handling. While I don't think that having more solutions to the same problem is a bad thing, I also believe that we could benefit from a coordinated effort.

My colleagues are actively working with GoDaddy's maintainers to find a common way forward by standardizing the "ExternalSecret" CRD and eventually merging the projects[2].
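For anyone who hasn't seen it, the "ExternalSecret" CRD being standardized is a small manifest that points at a backend secret and asks a controller to materialize it as a native Kubernetes Secret. A rough sketch of the shape, with field names following the GoDaddy project's README (the standardization effort may well change them):

```yaml
# Illustrative only -- field names follow GoDaddy's
# kubernetes-external-secrets; the merged standard may differ.
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  backendType: secretsManager   # or systemManager, vault, gcpSecretsManager, ...
  data:
    - key: prod/db/password     # name of the secret in the backend
      name: password            # key in the resulting Kubernetes Secret
```

The controller watches these objects and creates (and keeps in sync) an ordinary Secret named `db-credentials` that pods consume as usual.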



This is why we need standard protocols and data formats for these common systems. If there were just a "secrets protocol" or "secrets data format", any program could just implement it for input and output, along with a standard interface to perform the actions (a single URL for example). It used to be commonplace to just write an RFC for what you were doing and then other people would use the RFC. Which wasn't perfect, but nothing ever is...

Instead, the standard solution today is custom integration, which leads to a lot of reinventing the wheel and incompatibility with extremely similar products.

I will run the risk of sounding condescending here, but I sense some negativity in your comment that I fail to find a justification for.

People need solutions to their problems and they develop them asynchronously and in isolation from each other. Turns out that some problems are more universal than others and could benefit from a common effort.

Suddenly, solutions collide and collaboration happens. What you describe is exactly what they are trying to do now: https://github.com/godaddy/kubernetes-external-secrets/pull/...

So, rejoice! The magic of open source and internet enabled collaboration is happening right before your eyes! :)

My negativity comes from this specific solution. My two main problems with it:

1. It's the exception, not the rule. Most of the time there aren't collaborative solutions. Most of the time people work in their silos and make something work for themselves. Often it's corporate-funded, and those corporations have no consideration for others being able to pick up the work once they've abandoned it. The fact that there are 10 of the same thing to integrate is proof enough. It's just gotten so ridiculous that somebody finally had to address it. Turns out it's a company that has to deal with all of those implementations anyway.

2. This is one tool figuring out how a bunch of other tools can integrate into it. It's like what I'm proposing, if you assume the least work possible to achieve maximum laziness for your own specific tool and use case.

Their solution wasn't "hey, it looks like a lot of things use secrets. maybe it'd be cool if we made one way for any system with secrets to interoperate with any other system, and try to get it adopted by existing vendors." Their solution was "hey, we just need to get all these other things to work for the one tool we're using." Rather than "how can we make this more composable, more compatible, easier to implement, for everybody... even if they're not using our tool...", it became "how can we make this easier for just us?"

That's what is always happening, has always been happening, in this space, for nearly 10 years. Either everybody latches their custom tech onto a single platform and no real open source solutions get made, or everybody spits out incompatible, overbuilt, underthought, opinionated solutions for their own problem. We don't build standard solutions in the cloud world, we build log cabins.

I mean, just read the PR. They're talking about coding into this 'standard' support for each different implementation. Like "this is a vault secret, and this is a gsm secret." That's the opposite of what I want. I want it to say, "this is what a secret looks like. now you figure out how to use it.", or, "give me your secret. I don't care who or what you are, because we all speak the same secret language." That is what an internet standard is supposed to be. Not "this is a bind9 DNS record, this is an AD DNS record, this is a RedHat DNS record, this is a Route53 DNS record".

The "container landscape" shouldn't remotely resemble what it does today. The idea of a "container" should have been standardized in an agnostic way, without requiring all the (admittedly useful, but also completely unnecessary, and often burdensome) features Docker threw into their tool. Yes, many people's lives were made easier. But a whole lot of other lives were made harder, to the point of small economies built out of badly re-implementing and custom-integrating into one precious and incompatible concept.

I would call Kubernetes the open source Active Directory, but Active Directory is literally ten times more standards compliant.

It's worth noting that the "Kubernetes External Secrets"[0] project from Godaddy is now supplanted by "Secret-Manager"[1].

I've been using Secret-Manager and it works very well.

The authors of "kube-secret-syncer" mention "[other solutions] lack either in security, caching or flexibility".

When it comes to "secret-manager", although I cannot vouch for its security, the codebase is very small and probably easily auditable.

It's also very flexible. It supports "SecretStores", currently AWS, GCP and Vault out of the box, and it's easy to add more.

Not sure why "caching" is mentioned in the mix.

I'm surprised they decided to re-invent the wheel instead of improving secret-manager.

[0]: https://github.com/godaddy/kubernetes-external-secrets

[1]: https://github.com/itscontained/secret-manager

Why is it supplanted? Is it a fork? I still see commits on GoDaddy's repository.

Secret-Manager docs are, ahem, limited.

I could be wrong; I had originally started using "external-secrets" and then, I believe, found out about "secret-manager" from the Godaddy repository.

I've used both solutions, and ultimately, I think itscontained/secret-manager is better than external-secrets.

Their docs were re-jiggled a few days ago and I agree it made them look nonexistent. There's not a _ton_ of documentation, but it's there[0][1]



edit: found the link in the Godaddy repo to "secret-manager": https://github.com/godaddy/kubernetes-external-secrets/issue...

I was mistaken when I said "secret-manager supplanted external-secrets". It's a Golang rewrite from a user.

Hey there, I work on the infra team at Contentful and wanted to expand on the caching. Polling AWS Secrets Manager often can incur considerable costs since it is priced by API calls. We've tried to alleviate this by caching the list of secrets and their values in the process.
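To make the caching concrete: Secrets Manager is priced per API call, so putting a small TTL cache in front of the client cuts the bill dramatically when many pods keep polling the same secrets. A minimal sketch of the idea (the `fetch` callable, standing in for a real boto3 `get_secret_value` call, is an assumption for illustration):

```python
import time


class SecretCache:
    """Cache secret values for a TTL so repeated lookups of the
    same secret don't each incur a billable Secrets Manager call."""

    def __init__(self, fetch, ttl_seconds=300.0, clock=time.monotonic):
        self._fetch = fetch      # e.g. lambda n: client.get_secret_value(SecretId=n)
        self._ttl = ttl_seconds
        self._clock = clock
        self._entries = {}       # secret name -> (value, fetched_at)

    def get(self, name):
        now = self._clock()
        hit = self._entries.get(name)
        if hit is not None and now - hit[1] < self._ttl:
            return hit[0]        # still fresh: served from cache, no API call
        value = self._fetch(name)  # miss or stale: one API call
        self._entries[name] = (value, now)
        return value
```

A production version would also want per-entry jitter (so all entries don't expire at once) and a way to force-refresh on auth errors, but the cost saving comes from this basic shape.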
Kind of a different problem, but I've had a really good experience using Hashicorp's Vault, which is excellent, paired with the vault-secrets-operator for Kubernetes to do my secrets management. It will sync secrets from a Vault path and create a Kubernetes Secret that you can use like any other secret. At least this way I feel like there's less lock-in to a cloud provider (and some of the places I run this have on-prem Kubernetes, so I need something that works outside of the cloud, and sometimes without internet).
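For reference, that operator flow is also driven by a small CRD: you declare the Vault path, and the operator creates and keeps in sync a regular Kubernetes Secret. A rough sketch (the apiVersion and field names are approximations of the ricoberger/vault-secrets-operator manifest; check the project's docs for the exact schema):

```yaml
# Illustrative shape only -- consult the operator's docs for exact fields.
apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: my-app-secrets
spec:
  path: secret/data/my-app   # KV path in Vault to sync from
  type: Opaque               # type of the generated Kubernetes Secret
```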



The entire Kubernetes secret space is a bit immature, with no standard solutions. Many of the larger solutions are vendor-specific and don't solve the problem in a generic way; see AWS[1] or Vault[2][3].

I've been discussing the problem-space with the Godaddy External Secret maintainers and they seem a bit burnt-out. There is work on standardization here https://github.com/godaddy/kubernetes-external-secrets/pull/..., but this more covers creating Kubernetes Secrets from external sources, work still remains around a generic pod injector solution.

A few of us have started work on what the implementation of this would look like over at https://github.com/itscontained/secret-manager.

[1] https://github.com/mumoshu/aws-secret-operator

[2] https://github.com/hashicorp/vault-k8s

[3] https://banzaicloud.com/blog/inject-secrets-into-pods-vault-...

Just switch from kube to ECS already, if you're in AWS. Much better integration and support.
ECS is a joke. Implementing scaling policies is way too complicated, and don't even get me started on scaling out with EC2-based capacity providers [1]. Combine that with the mediocre job CloudWatch does at collecting and acting on real-time metrics, and you end up with a solution where you still have to do everything yourself. And the tooling and integration are not on par with what the k8s ecosystem is sporting nowadays.

What people want is a cluster engine they shove a bit of text into and it does the rest. Managed k8s does that job well enough, as does ECS. The difference is, k8s knowledge is transferable to a growing number of businesses, ECS knowledge is not. Something to keep in mind if you're looking for employment in that area.

k8s also has the beauty of being able to exist completely disjointed from AWS infrastructure. Nothing better than being able to lock in clueless product teams -- not every dev is interested in that whole "service ownership" idea -- into their clusters and let them bear the responsibility. ECS is way too tightly integrated with AWS, and requires changing a lot more infrastructure. This is not easy to allow in an org where the NOC assigns VPC and general network layout.

--- [1] Fargate still does not support CMK-encrypted ephemeral volumes https://github.com/aws/containers-roadmap/issues/915. From this you can tell that nobody spending serious money with AWS is using ECS too much, as they would have hopped on implementing it by now. And using AWS integrated containers without Fargate is pointless IMO, as that's exactly the kind of service you'd be looking to get from paying extra for big cloud.

Really? I'd found even simple things like volume mounted secrets a pain to use.


Secrets can be fetched by the CloudFormation template of the Service. "Define a config file as a "volume" and mount it into the container" is very unusual. Store it in S3 instead, and give your Task's IAM Role permission to fetch it.
> define a config file as a "volume" and mount into the container

That is how Kubernetes secrets work, so I wouldn't call it unusual.
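Concretely, mounting a Secret as files in Kubernetes is just a few lines of Pod spec; every key in the Secret becomes a file under the mount path:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: creds
          mountPath: /etc/secrets   # each key appears as /etc/secrets/<key>
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-credentials  # an existing Secret in the same namespace
```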

And so many container-based applications expect secrets to be present in a file.

Kubernetes provides an easy-to-use abstraction for the same, which ECS does not.

> which ECS does not.

It actually does. You may, if you wish, have a volume and mount it in ECS tasks [0][1]. The issue above does not seem legit.

[0] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...

[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGui...

None of these options is easy to use for a secret or a simple configuration file. If I have a configuration file, I can easily mount it with the --volume option of docker run. But getting the same on ECS requires a much more complex setup than what is needed for k8s. Why do I need EFS/EBS volumes? Why doesn't this work well with Secrets Manager or Parameter Store?

Yes, k8s is a complex beast - but ECS isn't as clean as it looks.

> Why doesn't this work well with Secret Manager or Parameter Store?

Make a Parameter that reads from Secrets Manager or Parameter Store in the CloudFormation template of your ECS Service, and pass the value to the TaskDefinition as an environment variable. No need for volumes at all.
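To make that concrete, ECS can also inject the value directly via the `Secrets` field of a container definition, referencing a Parameter Store parameter (or Secrets Manager secret) by ARN. A trimmed CloudFormation sketch, where the ARN and names are placeholders:

```yaml
# Sketch only -- a real TaskDefinition needs more properties
# (Cpu, Memory, RequiresCompatibilities, execution role, ...).
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-app
    ContainerDefinitions:
      - Name: app
        Image: my-app:latest
        Secrets:
          - Name: DB_PASSWORD   # env var the container sees
            ValueFrom: arn:aws:ssm:us-east-1:123456789012:parameter/prod/db-password
```

The task execution role needs `ssm:GetParameters` (and `kms:Decrypt` for encrypted parameters) on the referenced ARN.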

One thing I do like in ECS is that you can specify an environment variable to be fetched from Parameter Store or KMS, without ever seeing the value yourself. A convenient balance.