
Nickel: Better Configuration for Less


I have a very hard time getting behind these complex configuration languages. To me what makes a configuration format good is the simplicity of reading the configuration of a program, and almost all of these languages are optimizing for feature complexity over readability. I think that all of the popular config formats (yaml, json, toml, etc) have issues, but none of the major issues with them have to do with being unable to represent a fibonacci sequence in their language.

To draw a direct comparison, when I look at the examples in the github repository, all I can think is "I would never want to have this be a source of truth in my codebase". While I get frustrated w/ whitespace in yaml and the difficulty of reading complex json configuration, if I need a way to programmatically load complex data I would almost always rather use those two as a base and write a 'config-loader' in the language that I am already using for my project (instead of introducing another syntax into the mix)
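For illustration, a rough sketch of what such a 'config-loader' could look like in Python (the AppConfig fields and the config.yaml name are hypothetical, and PyYAML is assumed): the file stays plain YAML, while typing, validation and defaults live in ordinary application code.

  # Hypothetical config-loader: plain YAML on disk, typing/validation in the host language.
  from dataclasses import dataclass

  import yaml  # PyYAML, assumed available

  @dataclass
  class AppConfig:
      host: str
      port: int
      debug: bool = False

  def load_config(path: str) -> AppConfig:
      with open(path) as f:
          raw = yaml.safe_load(f) or {}
      # Unknown keys or missing required fields raise here, in normal application code.
      return AppConfig(**raw)

  config = load_config("config.yaml")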

Conversely, I've been using quite a bit of Jsonnet in different projects for a few years now, and it's a life changer.

Here's a public example - using Jsonnet to parametrize all core resources of a bare metal Kubernetes cluster: [1]. This in turn uses cluster.libsonnet [2], which sets up Calico, Metallb, Rook, Nginx-Ingress-Controller, Cert-Manager, CoreDNS, ...

Note that this top-level file aims to be the _entire_ source of truth for that particular cluster. I know of people who are reusing some of the dependent lib/*libsonnet code in their own deployments, which shows that this is not just abstraction for the sake of abstraction.

Jsonnet isn't perfect, but it allows for actual building of abstraction layers in configuration, guaranteed pure evaluation, and not a single line of text templated or repeated YAML.

[1] - https://cs.hackerspace.pl/hscloud/-/blob/cluster/kube/k0.lib...

[2] - https://cs.hackerspace.pl/hscloud/-/blob/cluster/kube/cluste...

Does a "configuration language" specifically incorporate features for "overlaid" or "unified from parts" configuration?

Much like layered dockerfiles, mature configuration often comes from several places: env vars, configuration appropriate for checkin to git (no secrets), secrets configuration, and of course the old environment-specific configuration.

All of that merges to "The Configuration".
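As a rough sketch of that merge step (Python; the three layers below are hypothetical stand-ins for a checked-in base file, an environment-specific overlay, and secrets injected at deploy time):

  # Hypothetical layers: base config from git, environment-specific overlay, injected secrets.
  def deep_merge(base: dict, overlay: dict) -> dict:
      """Later layers win; nested dicts are merged recursively."""
      merged = dict(base)
      for key, value in overlay.items():
          if isinstance(value, dict) and isinstance(merged.get(key), dict):
              merged[key] = deep_merge(merged[key], value)
          else:
              merged[key] = value
      return merged

  base = {"db": {"host": "localhost", "port": 5432}, "log_level": "info"}
  env_specific = {"db": {"host": "db.prod.internal"}, "log_level": "warn"}
  secrets = {"db": {"password": "<injected at deploy time>"}}

  the_configuration = deep_merge(deep_merge(base, env_specific), secrets)
  # {'db': {'host': 'db.prod.internal', 'port': 5432, 'password': ...}, 'log_level': 'warn'}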

Also, these seem close to templating languages.

I've done this several times with a "stacked map" implementation (much like the JSP key lookups went through page / session / application scopes, or even more convoluted for Spring Webflow).
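Python's collections.ChainMap is a ready-made version of that stacked-map lookup; a rough sketch (the APP_ environment prefix is made up for illustration):

  # Stacked-map lookup: the first layer that has a key wins,
  # much like page / session / application scopes.
  import os
  from collections import ChainMap

  defaults = {"port": 8080, "log_level": "info"}
  config_file = {"log_level": "debug"}  # e.g. parsed from a checked-in file
  env_overrides = {k[len("APP_"):].lower(): v
                   for k, v in os.environ.items() if k.startswith("APP_")}

  config = ChainMap(env_overrides, config_file, defaults)
  print(config["port"], config["log_level"])  # 8080 debug (unless APP_... is set)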

Answer for Jsonnet: layering/overrides from multiple sources: yes, as you define it (it's a programming language, the logic is yours to define, based on your particular use case). But no access to environment variables, as that's impure, and not really in scope for them.

I also have a hard time with these, for the same reasons. I'm torn though; I want a config format that's super easy to read, and that I can easily change anywhere with nothing more than sed/vim/nano/notepad - but I also want to avoid typos and formatting problems.

I'm not sure which is the lesser evil.

Actually, I think (as always), that it depends. For something simple like a config file for an app, JSON/YAML is usually fine.

But for something more complex, like IaC (Infrastructure as Code) definitions, I think perhaps "proper" programming languages might be more beneficial. I had a look at Pulumi just yesterday, and I very much like the idea of writing a simple C#/Typescript app to deploy my infrastructure, when compared to something like HCL (HashiCorp Configuration Language) or bash scripts that wrap the Azure/AWS CLI tooling.

Starlark is OK (it is very similar to Python, with the non-deterministic parts removed). But the lack of type hints really bites when you try to read the underlying macros / functions that others have provided. That is much harder to do without types (which Nickel seems to want to address).

Honestly, I would prefer anything that mimics popular languages to lower the bar of reading.

Have you tried Dhall? Static types and enough power to provide the tools you need, but deliberately not enough to allow full on arbitrary computation.

I've played with it briefly, along with the Kubernetes plugin, and it was a nice experience.

Configuration isn't code isn't data. Data belongs in a database. Code belongs in a codebase. Configuration doesn't. Ideally config should be reduced down to keys and values and stored anywhere where it's easy to push to the environment where the code runs. I don't understand the immense expansion and proliferation of the config layer. I never touch code anymore. All I handle is config and tooling around it. YAML engineering.

Better configuration doesn't mean more ways to treat config like code, or data like config, or god forbid, code. It means treating config like config and code like code. Gitops just makes me sad. Truth should only flow in one direction. The first time I had to write a script to utilize the GitHub API to auto-update a code repo I died a little inside.

So we try to keep these three areas separate, and config generally ends up in our deployment pipeline. The problem is what do you do when code changes necessitate config changes? Adding/removing config properties, etc. We don't want 50 developers messing around with production deployment pipelines.

Doesn't something like this go a little way toward solving that problem?

Manage the complexity, yeah. Solve it? Well, if you’re managing complexity, you’re not really solving it, are you?

Heavy lifting needs to be done with code. If your config layer is growing, I would look for why that is and how you could push the complexity to the code or data and “boil down” the config until it can be represented with just keys and values.

Growing config means there are areas of your application that aren't being properly encapsulated. But once something is enshrined as config, it usually never gets treated as a legitimate application concern, worthy of a data model and a UI for changing it. Devs just keep adding onto it, and before you know it you need a whole team just to deal with it.

> I have a very hard time getting behind these complex configuration languages. To me what makes a configuration format good is the simplicity of reading the configuration of a program, and almost all of these languages are optimizing for feature complexity over readability. I think that all of the popular config formats (yaml, json, toml, etc) have issues, but none of the major issues with them have to do with being unable to represent a fibonacci sequence in their language.

Static languages like JSON and YAML are fine for toy configurations, but they don't scale to the most basic real-world configuration tasks. Consider any reasonably sized Kubernetes project that someone wants to make available for others to install in their clusters. The project probably has thousands of lines of complex configuration, but much of it will change subtly from one installation to another. Rather than distributing a copy of the configs and detailed instructions on how to manually adapt the configuration for each use case, it becomes natural and expedient to parameterize the configuration.

The most flat-footed solution involves text-based templates (a la jinja, mustache, etc), which is pretty much what Helm has done for a long time. But text-based templates are tremendously cumbersome: you have to make sure your templates always render syntactically correct (and ideally human-readable) output, which is difficult because YAML is whitespace-sensitive and text templates aren't designed to make it easy to control whitespace.
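As a small illustration of that pain (Python with the jinja2 package assumed), a naively written template leaves stray blank lines behind and makes the author responsible for every space of YAML indentation:

  # Naive YAML templating: with default jinja2 settings, the {% %} lines leave blank
  # lines behind, and the template author must hand-manage indentation (or reach for
  # trim_blocks / lstrip_blocks / {%- -%} whitespace control) to keep the output valid.
  from jinja2 import Template

  template = Template("""\
  spec:
    containers:
  {% for name in containers %}
      - name: {{ name }}
  {% endfor %}
  """)
  print(template.render(containers=["app", "sidecar"]))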

A similarly naive solution is to simply encode a programming language into the YAML. Certain YAML forms encode references (e.g., `{"Ref": "<identifier>"}` is equivalent to dereferencing a variable in source code). Another program evaluates this implicit language at runtime. This is the CloudFormation approach, and it also gives you some crude reuse while leaving much to be desired.
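A toy sketch of that implicit language (Python; the resolver and the example document are made up for illustration, not CloudFormation's actual semantics):

  # Walk the parsed document and substitute {"Ref": "<identifier>"} nodes,
  # which behave like variable dereferences against an environment.
  def resolve(node, env):
      if isinstance(node, dict):
          if set(node) == {"Ref"}:
              return env[node["Ref"]]
          return {k: resolve(v, env) for k, v in node.items()}
      if isinstance(node, list):
          return [resolve(v, env) for v in node]
      return node

  doc = {"Listener": {"Port": {"Ref": "WebPort"}}}
  print(resolve(doc, {"WebPort": 8080}))  # {'Listener': {'Port': 8080}}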

After stumbling through a few of these silly permutations, it becomes evident that this reuse problem isn't different from the reuse problems that standard programming languages solve; however, what is different is that we don't want our configuration to have access to system APIs (including I/O), and we may also want to guard against non-halting programs (which is to say that we may not want our language to be Turing complete). An expression-based configuration language becomes a natural fit.

After using an expression-based configuration language, you realize that it's pretty difficult to make sure that your JSON/YAML output has the right "shape" such that it will be accepted by Kubernetes or CloudFormation or whatever your target is, so you realize the need for static type annotations and a type checker.

Note that at no point are we trying to implement the fibonacci sequence, and in fact we prefer not to be able to implement it at all because we expressly prefer a language that is guaranteed to halt (though this isn't a requirement for all use cases, I believe it does satisfy the range of use-cases that we're discussing, and the principle of least power suggests that we should prefer it to turing-complete solutions).

The use case for those executable configuration languages is that you often need to set the same setting on different programs, maybe even on different machines, but they must all reflect the same decision in different ways (e.g. your service sets a port to listen on, so your firewall must open that port for internal traffic, and your applications must use that port as their data source).
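A toy sketch of that idea (Python; all names and formats are made up): one value, several rendered settings that can never drift apart.

  # One decision (a port), reflected in a service config, a firewall rule,
  # and a client setting; change SERVICE_PORT once and all three agree.
  import json

  SERVICE_PORT = 5432

  service_conf = json.dumps({"listen": {"port": SERVICE_PORT}})
  firewall_rule = f"allow tcp from internal to any port {SERVICE_PORT}"
  client_conf = json.dumps({"datasource": {"host": "db.internal", "port": SERVICE_PORT}})

  print(service_conf, firewall_rule, client_conf, sep="\n")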

That said, this one language does not look powerful enough for that. So I'm not sure where it can be used.

> That said, this one language does not look powerful enough for that. So I'm not sure where it can be used.

I mean, it's used to configure all of NixOS, so I'm not sure if that's true.

Yeah... I honestly don't see the appeal of nickel.

It is sold as "You use this to generate configuration in other formats like JSON"... but why? Why would I want to use some language other than the target format to configure things? Why am I making my configuration a 2 step process? And even if I bought all of those reasons, why wouldn't I just use a general purpose language instead? Why have some esoteric language dialect whose only purpose is... making configuration files?

I'd much rather use Bash, python, perl, javascript, typescript, groovy, Java, kotlin, C++, C, Rust, erlang, php, awk, pascal, go, Nim, Nix, VB, Haxe, coffeescript etc. Really, take your pick. Any well established language seems like a much better approach than something like this.

Here's a snippet for configuring a systemd timer on NixOS. Note that if I were to use the systemd configuration language, it would be spread across two files (the timer and the service itself)[1]. If I don't have "startAt" in the definition, it won't generate the timer file. If I spell it "statrAt" it will give me an error when I generate it (or in my editor if I have that configured). Note it's possible to fallback on using the json-like syntax to generate the ini-like systemd configuration files manually, which is useful to have when needed, but mostly it's about writing fairly simple functions that increase the signal-to-noise of the configuration file by removing boilerplate while at the same time detecting mistakes earlier.

  systemd.services.tarsnapback  = {
    startAt = "*-*-* 05:20:00";
    path = [ pkgs.coreutils ];
    environment = {
      HOME = "/home/XXXX";
    };
    script = ''${pkgs.tarsnap}/bin/tarsnap -c -f "$(uname -n)-$(date +%Y-%m-%d_%H-%M-%S)" "$HOME/ts" '';
    serviceConfig.User = "XXXX";
  };

[1]: Quick reference if you aren't familiar with systemd timers: https://wiki.archlinux.org/index.php/Systemd/Timers

I would 100% choose to write a systemd foo.timer file, and the foo.service file, and reference those.

You're throwing away all the organizational learning and preexisting systemd documentation, and forcing something different on the world. `man systemd.timer` contains no mention of `startAt`; what you have there is something inherently different from systemd.

And what if I want more complex rules, like a combination of intervals and time from boot?

> I would 100% choose to write a systemd foo.timer file, and the foo.service file, and reference those.

NixOS gives you this option, and I choose not to. Fortunately nobody is forcing you to use this (or forcing me to not use it).

> You're throwing away all the organizational learning and preexisting systemd documentation, and forcing something different on the world. `man systemd.timer` contains no mention of `startAt`

Not quite throwing it all away, because you can easily observe the output of this before making it live. Yes, systemd.timer contains no mention of startAt because, as you correctly observed, this is something inherently different from systemd. startAt is used by other configuration options to specify items running at specific calendar times, so it's reasonably consistent within NixOS itself.

Reading the Nix documentation is quite simple (and it shows the currently configured value for you):

  % nixos-option systemd.services.tarsnapback.startAt
    Value:
    [ "*-*-* 05:20:00" ]

    Default:
    [ ]

    Type:
    "string or list of strings"

    Example:
    "Sun 14:00:00"

    Description:
    ''
      Automatically start this unit at the given date/time, which
      must be in the format described in
      <citerefentry><refentrytitle>systemd.time</refentrytitle>
      <manvolnum>7</manvolnum></citerefentry>.  This is equivalent
      to adding a corresponding timer unit with
      <option>OnCalendar</option> set to the value given here.
    ''

> what you have there is something inherently different from systemd.

That's kind of the point. If it was inherently the same as systemd there would be no point to it. Systemd timers are quite boilerplate heavy (compare to e.g. a crontab entry), so when I'm not using nixos, I often end up copying an existing timer and modifying it.

> And what if I want more complex rules, like a combination of intervals and time from boot?

Add a time from boot of 120 seconds with this:

  systemd.timers.tarsnapback.timerConfig = { OnBootSec = "120"; };

For things that actually use all the bells and whistles of systemd, you'll need to specify all the various details.

[edit]

For a nice hyperlinked searching of options see also:

https://search.nixos.org/options?query=startAt&from=0&size=3...

Three things to address:

1) This doesn't have to be a two-step process. Specialized tools like kubecfg for Jsonnet will directly take a Jsonnet top-level config and instantiate it, traverse the tree, and apply the configuration intelligently to your Kubernetes Cluster.

2) General purpose languages are at a disadvantage, because most of them are impure. Languages that limit all filesystem imports to be local to a repository and disallow any I/O ensure that you can safely instantiate configuration on CI hosts, in production programs, etc. The fact that languages like Jsonnet also ship as a single binary (or simple library) that requires no environment setup, etc. also make them super easy to integrate to any stack.

3) Configuration languages tend to be functional, lazily evaluated and declarative, vastly simplifying building abstractions that feel more in-line with your data. This allows for progressive building of abstraction, from just a raw data representation, through removal of repeated fields, to anything you could imagine makes sense for your application.

Related reading: https://landing.google.com/sre/workbook/chapters/configurati...

I don’t think they tend to be lazily evaluated (unless you mean “lazy” in some other way than I’m familiar with), but in general I agree.

Jsonnet, Nix and CUE are lazily evaluated. Starlark is not IIRC. Dhall I don't know, but I would presume it is?

Nix as an example:

  nix-repl> { foo = 5 / 0; bar = 5; }    
  error: division by zero, at (string):1:9

  nix-repl> { foo = 5 / 0; bar = 5; }.bar 
  5

vs. Python as an obvious example of a language with eager evaluation:

  >>> { "foo": 5 / 0, "bar": 5 }["bar"]
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ZeroDivisionError: division by zero

This lazy evaluation allows for a very nice construct in Jsonnet:

  local widget = {
    id:: error "id must be set",
    name: "widget-%d" % [self.id],
    url: "https://factory.com/widget/%d" % [self.id],
  };
  {
    widgetStandard: widget { id: 42 },
    widgetSpecial: widget { name: "foo", url: "https://foo.com" },
  }

When the resulting code only expects a widget to have a 'name' and 'url' field, you can either have both automatically defined based on a single top-level ID, or override them, even fully skipping the ID if not needed. (A :: in Jsonnet marks a hidden field, i.e. one that will not be evaluated automatically when generating YAML/JSON/..., but can be evaluated by other means.)

JSON and YAML don’t offer any abstraction. If you want to describe kubernetes resources in such a way that you can deploy the same resources to many environments with subtle differences between them (e.g., namespace names, DNS names, etc) you want something to let you abstract so you aren’t manually trying to keep disparate copies of thousands of lines of frequently-changing config in sync.

The reason you don’t use regular languages for this task is because you want to enforce termination (programs can’t run forever without halting, allowing someone to DoS your system) or reproducibility (the config program doesn’t evaluate to different JSON depending on some outside state because the program did I/O). If your use case involves users who can be trusted not to violate these principles, then a standard programming language can work fine, but this frequently isn’t the case.

> you want to enforce termination

Nickel is Turing complete. See the fib example.

> or reproducibility

Nickel doesn't force reproducibility

So again, why Nickel and not a GP programming language?

Sorry, my reply, like throwaway's, missed the main point of your original comment of "why not a GP programming language".

A configuration file is uniquely suited to a pure and lazy language.

Pure, because all the advantages of a pure language remain, with none of the downsides; the result of evaluating the function is your configuration data. You don't need to do arbitrary I/O and ordering for generating configuration files.

Lazy because configuration files are naturally declarative, but you don't want to evaluate tons of things you have declared but then never used.

> Nickel is turing complete. See, the fib example.

I should have been more clear, I was listing potential reasons why you might not use a standard programming language. "Not wanting turing completeness" is a reason to use a non-turing-complete DSL. I wasn't suggesting that Nickel was appropriate for this particular use case, but many of the other languages in this category are (e.g., Starlark, Dhall).

> Nickel doesn't force reproducibility

Scanning the docs, I don't see anything about Nickel allowing I/O, so I believe you're mistaken.

https://github.com/tweag/nickel/blob/master/RATIONALE.md

> However, sometimes the situation does not fit in a rigid framework: as for Turing-completeness, there may be cases which mandates side-effects. An example is when writing Terraform configurations, some external values (an IP) used somewhere in the configuration may only be known once another part of the configuration has been evaluated and executed (deploying machines, in this context). Reading this IP is a side-effect, even if not called so in Terraform's terminology.

This is the relevant passage:

> Nickel permits side-effects, but they are heavily constrained: they must be commutative, a property which makes them not hurting parallelizability. They are extensible, meaning that third-party may define new effects and implement externally the associated effect handlers in order to customize Nickel for specific use-cases.

This answers your question about why Nickel is preferable to general purpose programming languages--the side-effects are more limited. Further, it reads to me like the "side-effects" are something that the owner of the runtime opts into by extending the sandbox with callables that can do side-effects as opposed to untrusted code being able to perform side-effects in any Nickel sandbox.

Hi, blog post author here. The idea behind effects in Nickel is to have very limited, use-case specific effects that can extend the standard interpreter. The goal is, as the example suggests, to make it able to interoperate with an external tool when absolutely necessary, such as Terraform or Nix. The idea is really not to have general effectful functions such as readFile or launchMissiles.

It's not clear from the docs whether any Nickel program can perform side-effects or if the Nickel interpreter must be extended to allow programs to perform side-effects (a la Starlark). Can you clarify this point?

The Nickel interpreter is intended to offer a mechanism to make it possible to extend it to add "effects", which is really just a pompous name for an external call. The idea is that, if you want to integrate it with Terraform for example, you would want to have a "getIp" effect to retrieve the IP of a machine once it has been evaluated. So you implement your external handler (say in Rust or whatever), and then you can call "getIp" from a Nickel program. Currently, we see no reason Nickel would ship with any effect by default. These are really just for extension purposes. Such additional effects would be required to be commutative in order not to hurt parallelizability, but you can't enforce that mechanically, so you'll have to trust the implementer.

This makes sense--allowing or preventing a particular instance of the interpreter to make side effects is perfectly reasonable. It would be concerning if all instances of the interpreter allowed client programs to make side effects.

Documentation is terribly lacking, and in addition the examples are perplexing.

Where is the "Url" type, which should be a union of a constrained subset of "Str" (as built by mylib.makeURL) and a three component record (as used in the example)?

Where can I specify that the "urls" list in the configuration must be present and nonempty, and that "host" and "port" are mandatory (or not)? And that the type of "urls" is a list of "Url", the type of "port" is "port", etc.?

How can a general purpose port type have a meaningful default value, given that duplicates are generally fatal and a configuration can contain multiple ports? Only individual port usages (e.g. ftpPort and telnetPort in a commonNetworkServices record) should have defaults.

Where is the body of numToStr? Maybe in some unmentioned standard library?

Is there some kind of enforceable separation between the configuration file (untrusted and supplied by the user) and a schema it must satisfy (trusted and supplied by the application)? What forbids the configuration file to, say, redefine makeURL as Str -> Str -> Num -> Str -> Str -> Str (to add a HTTP path and query string) and make the application's URL parsing fail?

Hi, blog post author here. As stated at the beginning, the project is still WIP, and I didn't expect it to end up on HN. Sorry for the lack of documentation and convincing examples for now!

The post is intended to present the ideas and let people look at the repo, but while the core language is rather complete, a lot of small but important things are still missing to make it usable right now. I sidestepped this issue to still be able to illustrate some points with simple examples.

To answer your questions:

- numToStr would be either in a standard library, or implemented here using standard library functions. It does not exist currently.

- the fact that the urls are non-empty could either be enforced by the contract itself (i.e. use a NonEmptyUrl contract), or you could combine a NonEmptyStr and a Url contract via merging. By default, things with a contract without a default must be present.

- I'm not sure I fully understand your last point, but I don't think a configuration file could "redefine" a function in any meaningful way. Say you write a program "main.ncl", which imports some external downloaded source, checks it against a contract you've written, and generates a config accordingly. Since the language is pure and variables are lexically scoped, nothing from an import could redefine anything in your current scope, be it your contracts or your functions.

I have been in a deep rabbithole with Nix lately. It seems almost too good to be true. Declarative configuration, and with NixOps, declarative provisioning as well. Along with the other tools in the ecosystem, there is quite some overlap with some of the popular tools from Hashicorps and others. I am just beginning to learn the intricacies of DevOps, and Nix seems like a one-stop shop. What are the downsides of Nix?

The Nix language is a big one, the lack of typing and comments which might help readers navigate Nix code, the arbitrary directory structure of nixpkgs, the very dynamic nature of derivations (if these keys are present on the derivation it will do one thing otherwise it will do very different things), the lack of documented escape hatches for things that you don’t care to make reproducible (in a perfect world, I would make my Python dependencies reproducible but the system package works well enough on my system and I can’t be bothered to write a recipe that builds some autotools-based C project with its own dependencies on obscure C projects scattered across the Internet, each with their own special snowflake build systems and their own dependency trees), etc.

Nix is definitely the right direction (it could completely solve the monorepo/multirepo problem and dramatically simplify CI/CD and otherwise significantly improve software development), but it’s not very pragmatic. It badly needs some product leadership.

When it’s hard to do something in Nix, it’s really hard. Let’s say you download a random binary which isn’t part of the Nix distribution — it is unlikely that you can run the binary; you may need to run ldd or nm to check the headers or construct the LD_LIBRARY_PATH manually. So many random build scripts assume certain things like cp exists in /bin/cp or use absolute paths (looking at you Python) that I tend to find I’m yak-shaving pretty often.

The documentation for Nix… is special. I’ve never seen any configuration need to explain the fixed-point [1] in order to do something which should be pretty simple.

On the other hand, when it works, it works brilliantly. I have projects where I want to fetch some random URL, or build some randomly generated C file; previously which I would have just said sod-it and checked it into the repository. Being able to share crafted environments ala direnv to colleagues and the continuous-integration is magical. Having per-project environment management which doesn’t require 3-4 tools like virtualenv, nvm, docker, docker-compose is really nice.

[1] https://nixos.org/guides/nix-pills/nixpkgs-overriding-packag...

Speaking to NixOps in particular, I've used it quite a bit at previous jobs, and still use it for my personal stuff.

It's very good and convenient (if you already know Nix), but it doesn't have a lot of maintainers so it's missing a lot of features compared to Terraform.

For a lot of people these features don't matter, but at companies over a certain size they'd probably be considered table stakes:

* Not really a good story out of the box for teams, because the state file is just an SQLite database on your computer. Terraform has all sorts of helpful options for where to store its state file. There was some work in progress to improve this on NixOps last time I looked.

* Doesn't support AWS load balancers. There is support for Azure load balancers though, so maybe this will come in the future? You can set up a load balancer separately and point it at your NixOps deployed instance, but meh.

* Doesn't support AWS ECS/Fargate, which is a shame because that would be an excellent combination (building a customized NixOS docker image is very easy and space-efficient).

* Doesn't support authenticating with temporarily assumed AWS roles. AWS "best practice" encourages using these.

On the other hand, it does have one feature that Terraform doesn't: you can very easily build and deploy your code at the same time as your infrastructure. Whether it's actually a good idea to do so is up for debate, but it's certainly convenient.

It's also usually a bit faster than Terraform I find.

I've been using it for 5 years at this point and love it. Biggest downside is that it's so different, which confuses e.g. upstream. Second biggest downside is that getting things right in a declarative manner is a bit more upfront work.

Specific to NixOS, but running software that hasn't already been packaged for Nix can be quite hard.

The language is not simple and discovery can be quite lacking.

TIL that there are people who like the nix language. I love NixOS, but hate the language.

I feel similarly. I will say that the lack of static typing compounds just about every other problem I have with Nix, so this seems like a marked improvement, but still it would be nice to have something that is more familiar and intuitive to programmers generally.

There is definitely a need for this class of tools (statically typed expression-based config languages), but it’s super weird to praise Nix as very simple and easy to understand and then to gripe that Dhall is complex.

I’ve been trying to learn Nix on and off for years and the language is often one of the things that trips me up. Dhall on the other hand looks simple even though it insists on a Haskell-esque syntax (which I find to be difficult to read).

All I really want is Starlark with type annotations or something similar, but Dhall is pretty close.

What is the selling point for this vs say Dhall, which looks superficially similar (Haskell-ey config lang)?

The examples for Nickel currently look a lot more like programming than config. Why not just use a general purpose programming language that you already know and has many libraries and mature tooling?

Dhall advertises a lack of Turing-completeness as a feature. One of the Nickel examples shows an implementation of a fibonacci function... (I have no idea if this implies Nickel is TC, nor why I would need such power in my config)

> Dhall features a powerful type system that is able to type a wide range of idioms. But it is complex, requiring some experience to become fluent in.

> Gradual types also lets us keep the type system simple: even in statically typed code if you want to write a component that the type checker doesn’t know how to verify, you don’t have to type-check that part.

> Complementary to the static type system, Nickel offers contracts. Contracts offer precise and accurate dynamic type error reporting, even in the presence of function types.

It's not clear to me that gradual typing as a mix of statically typed code + untyped code with "contracts" is necessarily simpler than just one or the other.

It's not clear to me that a specialised config language combining code + schemas is necessarily better than an existing general purpose language + some language-agnostic schema DSL.

I don't know, maybe this is all great, I feel the benefits need to be more clearly explained and demonstrated though.

I see so many complain about these config languages. I think it is a bit black and white to think of a perfect separation between code and config. In a perfect world and easy situations yes. But I think you are all missing the point of these languages. In my eyes they are templating languages. Have any of you guys seen the Helm charts some of us have to write daily? Mustache coding is terrible! I would love any of these languages instead of that. The only thing worse would be to have to write all the yaml yourself every time instead of any templating at all.

To that end, you could use any programming language to generate the yaml for your config. But then some of these have some nice features. I really like that Dhall is a total language and it does some nice things with importing Dhall files.

That is not to say you should unnecessarily complicate your config. Keep it easy and separate from code as much as possible.

I love using cap'n'proto to store configurations. It's really slick.

+ Type safe since you define the schema for every part of your configuration.

+ Ridiculously easy to read/understand (writing has a small ramp curve to build the muscle memory).

+ Type system is very flexible & rich (lists, maps, unions), supports generics (for built-in & custom types) & everything is neatly composable (including constants that reference constants).

+ The config compiles down to a minimal binary file.

+ You can have arbitrary configurations in 1 file (you compile a specific constant to a file so each configuration you want is just 1 constant value you define in the schema).

+ The binary file can be converted back to text using standard cap'n'proto tools so it's easy to double-check the config you're deploying.

+ Perfect backward/forward support for the configuration as long as you follow the rules (similar to protobuf/flatbuffers) since you have to define the schema for your constants.

+ Loading the config file from any data source (disk, network) is trivial and for on-disk usages you could mmap the struct to get even better performance (only the parts of the config you access would get paged in).

+ Supports a variety of languages. While the RPC stuff has a bit less adoption, the parts needed for configuration should be available in the most popular languages (I think the only missing language generator is Swift).

The only negatives are:

- You do have to define the schema for your config

- There's a single upfront cost to integrate cap'n'proto into your build system if you're not using CMake.

Neither feel like prohibitive negatives though. If you're looking at strong typing, why not go all the way & make sure that your entire config has a strongly typed structure to it? Not just that some field is an int, but also that there are certain specifically named fields, & that accessing these fields in a strongly-typed language will result in build errors if you forgot to change something.

> All in all, the Nix language is a lazy JSON with functions.

That's news to me!

I have been slowly learning the language, only enough to configure and package stuff as needed for my personal use, but it has not just been a lazy JSON with functions...

I don't want to interpret my configuration. I don't want it to act as a source code.

Many places at Google use text representation of protobuffers as config files and I like it a lot. It's typed. It's readable. The schema and text are source controlled. You have code generators for many languages.

(Work at Google, opinion is mine)
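A minimal sketch of what that looks like (Python; ServerConfig and its fields come from a hypothetical server.proto generated with protoc, and the text below would normally live in a checked-in .textproto file rather than a string):

  # Parse a checked-in text proto into a typed, generated message.
  # server_pb2 / ServerConfig are hypothetical; typos or wrongly typed
  # fields in the text fail at parse time rather than deep inside the program.
  from google.protobuf import text_format
  from server_pb2 import ServerConfig  # generated by protoc from server.proto

  config_text = """
  name: "frontend"
  port: 8080
  backends: "db.internal:5432"
  """

  config = ServerConfig()
  text_format.Parse(config_text, config)
  print(config.port)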

Text protos are decent (although a lack of a public spec limits their adoption outside of Google).

But that's just a serialization format, only a piece of the configuration puzzle. You may argue that all configuration should be fully static, but having gone from writing GCL/BCL at Google to having to deal with static, text-templated, copy-pasted YAMLs in the Real World - I want my GCL back. :)

Ironically, nickel was the name of the Google-internal, ill-fated replacement for GCL. I think it just lives on in blueprint files.

I also miss GCL, as quirky as it was. YAML composes poorly.