
· 12 min read
Pete Hodgson

While OpenFeature initially focused on support for server-side feature flagging, we know that a lot of feature flagging (likely the majority) happens on the client - in mobile apps and frontend web apps. As such, we're currently finalizing a proposal which extends the OpenFeature spec to support client-side use cases. By the way, if you're working on a feature flagging framework - whether it's a commercial product, an open-source project, or an internal tool - the folks at OpenFeature would love to hear more about how you approach client-side flagging.

In this post I'll summarize those changes, but to understand them in context we'll first talk about what makes client-side feature flagging different before diving into how that will impact the OpenFeature APIs.

Context is King: why client-side flagging is different

Static evaluation context

The key distinction between client- and server-side feature flagging is how often the evaluation context changes. In the typical server-side scenario - a service responding to HTTP requests - the context for a feature flagging decision might change completely with every incoming request. Each new request is coming from a new user, and a lot of the evaluation context which affects a feature flagging decision is based on the user making the request.

In contrast, with a client-side app all feature flagging decisions are made in the context of the same user - the user interacting with the client-side app - and so the evaluation context is relatively static. There are cases where evaluation context will change within a client-side app - when a user logs in, for example - but by and large with client-side code we can treat feature flag evaluation context as something that is fixed (while still providing mechanisms to update it).

The network is slow

With server-side flags, we can assume that evaluating a feature flag is a relatively fast operation. With some systems the flagging rulesets[1] live right next to where a flagging decision is needed, with flag evaluation happening either within the same process or in some sort of sidecar process. In this local evaluation model every flagging decision is a very fast operation. For frameworks that use a remote evaluation model, a flagging decision is still just a quick service call - akin to making a DB query or calling a remote cache.

This situation is quite different with client-side flags. Remote flag evaluation now requires a trip across the internet, and we have to anticipate such a service call to be slow, particularly if our users are behind a spotty internet connection. In fact, with a native mobile app we have to handle a fully disconnected client. Even with the local evaluation model, we still have to deal with the fact that the source of truth for our flagging ruleset is on the other side of a potentially high-latency network.

Eager evaluation for remote-evaluated systems

We can see that the network presents challenges for client-side apps using a remote evaluation model for flagging decisions. But we've also seen that the inputs into that flag evaluation - the evaluation context - are fairly static for client-side apps, and that means the results of flag evaluation are fairly static too.

How do we handle an expensive operation with fairly static results? We add caching! And that's what many client-side feature flagging frameworks do.

Specifically, when the app starts the flagging framework requests an eager evaluation of all the feature flagging decisions that might be needed, and then caches those decisions. Then whenever client-side code needs to make a flagging decision the framework simply returns the pre-evaluated result from its local cache.

sequenceDiagram
    actor app
    participant client as feature flagging logic
    participant cache
    participant provider as remote flag service
    app->>+client: userLoggedIn(userId)
    client->>+provider: evaluateFlags(evaluationContext)
    provider-->>-client: flag values
    client->>+cache: store(flag values)
    cache->>-client: .
    %% deactivate cache
    client->>-app: .
    %% deactivate client
    note right of app: some time later..
    app->>+client: operation that needs a flagging decision
    client->>+cache: getFlagValue(flagKey)
    note right of cache: no call to flagging service needed
    cache->>-client: previously evaluated value
    client->>-app: .

Put another way, with client-side feature flagging we can separate flag evaluation - passing an evaluation context through a set of rules in order to determine a flagging decision - from flag resolution - getting the flagging decision for a specific feature flag.
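To make that split concrete, here's a minimal TypeScript sketch of the pattern (all names here are illustrative, not from any real SDK): evaluation happens once per context change, over the network, while resolution is a synchronous local cache read.

type FlagValues = Record<string, boolean>;

interface RemoteFlagService {
  // evaluates *all* flags for the given context, on the server
  evaluateFlags(context: { targetingKey: string }): Promise<FlagValues>;
}

class CachingFlagClient {
  private cache: FlagValues = {};

  constructor(private service: RemoteFlagService) {}

  // evaluation: one network round-trip whenever the context is (re)established
  async onUserLoggedIn(userId: string): Promise<void> {
    this.cache = await this.service.evaluateFlags({ targetingKey: userId });
  }

  // resolution: a synchronous read of the pre-evaluated decision
  getBooleanValue(flagKey: string, defaultValue: boolean): boolean {
    return this.cache[flagKey] ?? defaultValue;
  }
}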

Keeping rulesets in sync for local-evaluated systems

Flagging frameworks that use a local evaluation model don't have to contend with network calls for every evaluation, but they still need to keep their local rulesets up to date and allow the client-side app to respond to changes in those rulesets. Again, this means using caches to keep the most recent ruleset available. It also means that the flagging framework may have an eventing or callback mechanism to inform application code that the ruleset has changed and that current flag values may be stale.

sequenceDiagram
    actor app
    participant sdk as feature flag SDK
    participant cache as local ruleset
    participant provider as remote flag service
    app->>+sdk: boot
    sdk->>+provider: get ruleset from last known state
    provider-->>-sdk: ruleset
    sdk->>+cache: Store ruleset
    cache->>-sdk: .
    sdk->>-app: .
    app->>+sdk: operation that needs a flagging decision
    sdk->>+cache: flag evaluation
    cache->>-sdk: locally evaluated flag value
    note right of cache: no call to flagging service needed
    sdk->>-app: flag value
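In code, that sync loop might look roughly like the following TypeScript sketch - a simplified, hypothetical SDK that polls for ruleset changes (a real implementation might use streaming instead) and notifies the app when cached values may be stale:

type Ruleset = { version: number; rules: unknown };

class LocalEvaluationSdk {
  private ruleset?: Ruleset;
  private listeners: Array<() => void> = [];

  constructor(private fetchRuleset: () => Promise<Ruleset>) {}

  async start(pollIntervalMs: number): Promise<void> {
    this.ruleset = await this.fetchRuleset();
    setInterval(async () => {
      const latest = await this.fetchRuleset();
      if (latest.version !== this.ruleset?.version) {
        this.ruleset = latest;
        // tell the app that previously returned flag values may be stale
        this.listeners.forEach((notify) => notify());
      }
    }, pollIntervalMs);
  }

  // the app registers a callback to re-evaluate flags when the ruleset changes
  onRulesetChanged(listener: () => void): void {
    this.listeners.push(listener);
  }
}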

Client-side support in OpenFeature

With OpenFeature we have been thinking about how to support these key differences between client-side and server-side feature flagging. We have come to refer to these differences as two paradigms: dynamic context (for server-side flags) and static context (for client-side flags).

OpenFeature's current Evaluation API supports the dynamic paradigm quite nicely, but to support the static paradigm (and thus client-side flagging) we need to add a second flavor of the Evaluation API.

Server-side evaluation today

Let's compare and contrast. A typical server-side flagging decision using OpenFeature's current SDK might look something like this:

@GetMapping("/hello")
public String getSalutation() {
final Client client = openFeatureAPI.getClient();
final evalContext:EvaluationContext = evalContextForCurrentRequest();

if (client.getBooleanValue("use-formal-salutation", false, evalContext)) {
return "Good day to you!";
}else{
return "Hey, what's up?";
}
}

You can see that we're passing evaluation context every time we need to make a flagging decision.

Client-side evaluation tomorrow

With the currently proposed OpenFeature changes, a client-side flagging decision would look more like this:

public String generateSalutation() {
    if (client.getBooleanValue("use-formal-salutation", false)) {
        return "Good day to you!";
    } else {
        return "Hey, what's up?";
    }
}

Note that we are no longer passing evaluation context when requesting a flagging decision.

However, OpenFeature still needs to take evaluation context into account, and our app still needs to make sure that OpenFeature has an accurate view of the current context. What does that look like?

We can imagine a client-side app where the evaluation context only changes when a user logs in or out. Let's say this app has an onAuthenticated(...) handler which fires whenever that happens. We can use that handler to make sure that the evaluation context used for subsequent feature flagging decisions is up-to-date:

// called whenever a user logs in (or out)
public void onAuthenticated(String userId) {
    OpenFeatureAPI api = OpenFeatureAPI.getInstance();
    api.setEvaluationContext(new MutableContext().add("targetingKey", userId));
}

This call to update the evaluation context can prompt OpenFeature's underlying flagging provider to update any cached feature flag values using the new evaluation context. The provider will be notified that the evaluation context has changed via a new onContextSet handler which is being added to the OpenFeature provider interface:

class MyFlaggingProvider implements Provider {
    // triggered when `setEvaluationContext` is called
    public void onContextSet(EvaluationContext oldContext, EvaluationContext newContext) {
        // here the provider can re-evaluate flags using the new evaluation context, updating any
        // previously cached flag values
    }
    //...
}
sequenceDiagram
    actor user
    participant app
    participant of as OpenFeature SDK
    participant provider as OpenFeature Provider
    participant backend as provider backend
    user->>app: logs in
    app->>+app: onAuthenticated(userId)
    app->>of: setEvaluationContext({targetingKey:userId})
    of->>+provider: onContextSet(oldContext,newContext)
    deactivate app
    provider->>+backend: evaluateFlags(newContext)
    backend->>-provider: flag values
    provider->>provider: update cache with new values
    deactivate provider

JavaScript niceties

JavaScript is the most common runtime for client-side flagging, and it comes with some peculiarities which we wanted to accommodate as we designed the OpenFeature API.

In order to align the OpenFeature API with most existing feature flagging providers and to play nicely with frontend frameworks, the static context flavor of the JavaScript Evaluation API will be a synchronous call:

function Salutation() {
    const useFormalSalutation = client.getBooleanValue('use-formal-salutation', false);
    if (useFormalSalutation) {
        return <blink>Good day!</blink>;
    } else {
        return <blink>What's up!</blink>;
    }
}

Contrast this with how a server-side flagging decision would be implemented, using the dynamic context flavor of the Evaluation API:

const client = OpenFeature.getClient();

app.get('/hello', async (req, res) => {
    const evalContext = evaluationContextForRequest(req);
    const formalSalutation = await client.getBooleanValue('use-formal-salutation', false, evalContext);
    if (formalSalutation) {
        res.send('Good day!');
    } else {
        res.send("What's up!");
    }
});

Note that getBooleanValue is async - it returns a promise. That's pretty much unavoidable. This dynamic flavor of the evaluation API expects a different evaluation context every time it's called, and that means it can't be synchronous, because some feature flagging implementations will need to use asynchronous mechanisms such as network calls in order to perform a flag evaluation with that new context.

However, in the earlier client-side example we are in the static context paradigm, which means that the evaluation context has been provided ahead of time, so we can pretty safely assume that the flagging decision can be made synchronously by pulling from a local cache.

Non-authoritative results

This synchronous API also lines up better with the way rendering works in client-side web frameworks like React and Vue - via a synchronous call. However, with a synchronous API we have a potential for a race condition. What happens if our rendering logic asks for a flagging decision before a pre-evaluation operation has completed? We can't block, and so will have to return some sort of non-authoritative decision - a default value, or a previously evaluated value, or perhaps a null value.

client.setContext(updatedEvalContext);

// this result will NOT be based on the evaluation context we
// provided immediately above - our feature flagging framework
// won't have had a chance to actually send this new context
// anywhere
const result = client.getBooleanValue('some-flag', true);

This potential for stale or non-authoritative results is the price we pay for using a synchronous API.

When we have received a non-authoritative answer we can pass it on to our rendering logic, but we also need some way to know when an authoritative value is available, so that we can trigger a re-render using that new value. OpenFeature will provide a mechanism for that in the form of Provider Events. Here's how you might use provider events as part of a React hook:

client.addHandler(ProviderEvents.FlagValuesChanged, () => {
    // this would trigger a re-render
    setUseFormalSalutation(client.getBooleanValue('use-formal-salutation', false));
});
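Putting those pieces together, a custom hook might look something like this sketch. The hook itself and the removeHandler cleanup call are my assumptions about how this could be wired up, not part of the proposal, and as in the snippets above, client and ProviderEvents are assumed to be in scope:

import { useEffect, useState } from 'react';

function useBooleanFlag(flagKey: string, defaultValue: boolean): boolean {
    // the initial read may be non-authoritative if pre-evaluation hasn't completed
    const [value, setValue] = useState(() => client.getBooleanValue(flagKey, defaultValue));

    useEffect(() => {
        const handler = () => setValue(client.getBooleanValue(flagKey, defaultValue));
        client.addHandler(ProviderEvents.FlagValuesChanged, handler);
        // assumed cleanup API: unregister the handler when the component unmounts
        return () => client.removeHandler(ProviderEvents.FlagValuesChanged, handler);
    }, [flagKey, defaultValue]);

    return value;
}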

A multi-paradigm SDK

To recap, OpenFeature is planning to support client-side feature flagging by introducing the concept of two flagging paradigms: dynamic context (for server-side flagging) and static context (for client-side flagging). The OpenFeature APIs will have two slightly different flavors, depending upon which paradigm they support.

This raises a question around distribution - how will we distribute these two flavors of API for languages such as JavaScript and Java which support both paradigms? Our intention in those situations is to distribute two distinct packages - for example, with JavaScript we will likely have a @openfeature/js-node-sdk package and a @openfeature/js-browser-sdk package. We are opting for this approach rather than distributing a single "universal" multi-paradigm package containing both APIs because we think this will be less confusing for developers getting started with OpenFeature. But under the covers the two OpenFeature packages will share a lot of common code.

If a feature flagging provider wants to support both paradigms they will need to provide two distinct implementations of the Provider API. These implementations could be shipped as part of a "universal" multi-paradigm package, or as separate packages - that's a choice for each provider to make.

In summary

We've seen that the difference between feature flagging on the client vs on the server comes down to the difference between two paradigms of evaluation context - static context and dynamic context, respectively. These two paradigms are different enough that they warrant two different flavors of API within OpenFeature, with the primary difference being whether highly-dynamic evaluation context is provided to the flagging framework at the point a flagging decision is requested, or whether mostly-static evaluation context is updated out-of-band when it changes, ahead of the time when flagging decisions are made.

Getting better support in OpenFeature for client-side flags comes down to supporting both of these paradigms, and when you lay it all out, adding that support doesn't have too huge an impact on the OpenFeature API. As we just discussed, it means adding a different flavor of the Evaluation API. In addition, a few extra points of contact are added to the API surface so an app can tell the framework that the evaluation context has changed, and so the framework can in turn tell the app once new flag values are available.

With these additions in place, OpenFeature should have everything needed to bring client-side apps an open, vendor-agnostic standard for feature flagging.

Revisions

Initially published February 13, 2023.

Updated February 18, 2023 to include discussion of local vs remote flag evaluation models.


  1. by "rulesets" I mean the set of feature flagging rule configurations that define how flagging decisions should be made for each feature flag: "enable red_checkout_button flag for 50% of users", "only enable new_reco_algorithm for users in the 'internal_testers' group", etc. Rulesets plus evaluation context are the two inputs that fully define the output for any flagging decision.

· 2 min read
Todd Baert

Early this year, OpenFeature announced its intent to bring a standard to the rapidly growing development practice of feature flagging. In June it was accepted as a Cloud Native Computing Foundation Sandbox project. Now, we're pleased to announce a new milestone: OpenFeature has released 1.0 versions of its .NET, Go, Java, and JavaScript SDKs!

The release includes stable versions of the following features:

  • the Evaluation API, providing application authors with consistent, vendor neutral, feature flag evaluation
  • provider interfaces for flexible integration with a variety of feature flag systems

The specification documents associated with these features have been marked as hardening, meaning breaking changes are no longer allowed and usage of these features is encouraged in production environments. The release of these SDKs and the stabilization of the specification represent the culmination of efforts by a dedicated group of vendors, practitioners and subject matter experts. Providers are already available for major vendors and popular community projects. It's our hope that the efforts to stabilize the OpenFeature specification and SDKs will lead to more adoption of both OpenFeature and feature flagging in general, and promote a vibrant ecosystem around this increasingly important development pattern.
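As a rough illustration, here's what that vendor-neutral evaluation looks like with the JavaScript SDK; the provider class here is a hypothetical stand-in for whichever vendor or community provider you choose:

import { OpenFeature } from '@openfeature/js-sdk';
import { SomeVendorProvider } from 'some-vendor-provider'; // hypothetical provider package

// register a provider once, at startup
OpenFeature.setProvider(new SomeVendorProvider());
const client = OpenFeature.getClient('my-app');

async function demo() {
  // vendor-neutral flag evaluation, with an inline evaluation context
  const enabled = await client.getBooleanValue('new-welcome-banner', false, {
    targetingKey: 'user-123',
  });
  console.log(`new-welcome-banner: ${enabled}`);
}

demo();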

In addition to those mentioned above, experimental features available in the 1.0 SDKs include:

  • hooks, for adding arbitrary behavior to feature flag evaluation, ideal for telemetry integration, validation, and logging
  • the Evaluation Context interfaces, used as the basis for dynamic flag evaluation
  • implicit context propagation (JavaScript SDK only)

What's next?

Our goal in the upcoming months will be to harden our existing experimental features. Additionally, we'll work to develop and standardize new capabilities, including: client-side feature flagging, improved cloud native tooling, and implicit transaction-scoped data propagation of contextual attributes. Furthermore, we're working on SDKs for additional languages, including PHP, Python, and Ruby.

If you're interested in contributing or learning more about OpenFeature, please join our expanding and friendly community. Visit our GitHub, join the OpenFeature Slack channel on the CNCF Slack instance, or hop into our bi-weekly community meeting.

· 2 min read
Skye Gill

Logging

Logging is the act of keeping a log. A log (in this case) records events that occur in software.

Subject to many opinions and differing principles of best practice, the best thing we could do for the go-sdk was to create an implementation as open & configurable as possible. To achieve this, we've integrated logr, which allows the use of any logger that conforms to its API.

Applications may already have a chosen logging solution at the point of introducing OpenFeature. An integration with logr may already exist for their chosen solution (integrations exist for many of the popular logger packages in Go). If not, they could write their own integration.

Objective

Configure the popular Go logger zap with the go-sdk.

Prerequisites

  • Golang 1.17+

Scaffolding

  1. Go get the following dependencies

    go get github.com/open-feature/go-sdk
    go get go.uber.org/zap
    go get github.com/go-logr/logr
    go get github.com/go-logr/zapr # an integration of zap with logr's API
  2. Import all of the above into your main.go and create func main()

    package main

    import (
        "context"
        "log"

        "github.com/go-logr/logr"
        "github.com/go-logr/zapr"
        "github.com/open-feature/go-sdk/pkg/openfeature"
        "go.uber.org/zap"
        "go.uber.org/zap/zapcore"
    )

    func main() {

    }

Integrating the logger

  1. Create the zap logger with preset development config (for the sake of this tutorial)

    func main() {
        zc := zap.NewDevelopmentConfig()
        zc.Level = zap.NewAtomicLevelAt(zapcore.Level(-1)) // the level here decides the verbosity of our logs
        z, err := zc.Build()
        if err != nil {
            log.Fatal(err)
        }
    }
  2. Create the zapr logger (zap logger that conforms to logr's interface)

    l := zapr.NewLogger(z)
  3. Set the logger to the global openfeature singleton

    openfeature.SetLogger(l)
  4. Create an openfeature client and invoke a flag evaluation

    c := openfeature.NewClient("log")

    evalCtx := openfeature.NewEvaluationContext("foo", nil)

    c.BooleanValue(context.Background(), "bar", false, evalCtx)
  5. Check the result of go run main.go

    2022-09-02T14:22:31.109+0100    INFO    openfeature/openfeature.go:76   set global logger
    2022-09-02T14:22:31.110+0100 DEBUG openfeature/client.go:230 evaluating flag {"flag": "bar", "type": "bool", "defaultValue": false, "evaluationContext": {"targetingKey":"foo","attributes":null}, "evaluationOptions": {}}
    2022-09-02T14:22:31.110+0100 DEBUG openfeature/client.go:336 executing before hooks
    2022-09-02T14:22:31.110+0100 DEBUG openfeature/client.go:349 executed before hooks
    2022-09-02T14:22:31.110+0100 DEBUG openfeature/client.go:355 executing after hooks
    2022-09-02T14:22:31.110+0100 DEBUG openfeature/client.go:364 executed after hooks
    2022-09-02T14:22:31.110+0100 DEBUG openfeature/client.go:318 evaluated flag {"flag": "bar", "details": {"FlagKey":"bar","FlagType":0,"Value":false,"ErrorCode":"","Reason":"","Variant":""}, "type": "bool"}
    2022-09-02T14:22:31.110+0100 DEBUG openfeature/client.go:377 executing finally hooks
    2022-09-02T14:22:31.110+0100 DEBUG openfeature/client.go:383 executed finally hooks
  6. (optional) Tweak the level set in step 1 to decrease the verbosity

· 8 min read
James Milligan

Providers

A Provider is responsible for performing flag evaluation. It can be as simple as an interface for a key-value store, or act as an abstraction layer for a more complex evaluation system. Only one Provider can be registered at a time, and OpenFeature will no-op if one has not been defined. Before writing your own Provider, it is strongly recommended to familiarize yourself with the OpenFeature spec.
In this tutorial I will demonstrate the steps taken to create a new Provider whilst conforming to the OpenFeature spec, using a simple flag implementation. The flag evaluation will be handled by a simple JSON evaluator, and flag configurations will be stored as environment variables.

The following section describes how the flag evaluator portion of this project has been constructed; to skip to the Provider-specific implementation, jump ahead to the Creating a Compliant Provider section.

Creating the flag evaluator

The core of any flag Provider is the evaluation logic used to derive flag values from the provided metadata (referred to as the Evaluation Context). For this example I have put together a very simple JSON evaluator. Flags are configured using the structs described below, and are stored as environment variables:

type StoredFlag struct {
    DefaultVariant string    `json:"defaultVariant"`
    Variants       []Variant `json:"variants"`
}

type Variant struct {
    Criteria     []Criteria  `json:"criteria"`
    TargetingKey string      `json:"targetingKey"`
    Value        interface{} `json:"value"`
    Name         string      `json:"name"`
}

type Criteria struct {
    Key   string      `json:"key"`
    Value interface{} `json:"value"`
}

example JSON:

{
    "defaultVariant": "not-yellow",
    "variants": [
        {
            "name": "yellow-with-key",
            "targetingKey": "user",
            "criteria": [
                {
                    "key": "color",
                    "value": "yellow"
                }
            ],
            "value": true
        },
        {
            "name": "yellow",
            "targetingKey": "",
            "criteria": [
                {
                    "key": "color",
                    "value": "yellow"
                }
            ],
            "value": true
        },
        {
            "name": "not-yellow",
            "targetingKey": "",
            "criteria": [],
            "value": false
        }
    ]
}

Each flag value contains an array of Variants, each with their own array of Criteria. When a flag request needs to be evaluated, the Variants slice is iterated over; if the FlattenedContext matches all required Criteria for a specific Variant, the associated flag value is returned from the evaluator. If a matching Variant is not found, the DefaultVariant is returned in the response. The response also includes the variant name, the reason for the resulting value (such as ERROR, STATIC or TARGETING_MATCH) and any associated error (such as PARSE_ERROR). These values form the type-naive ResolutionDetail structure, which is then wrapped in a type-specific parent for each response type, such as BoolResolutionDetail. This will be discussed in the Creating a Compliant Provider section.

import (
    "encoding/json"
    "errors"
    "os"

    "github.com/open-feature/go-sdk/pkg/openfeature"
)

func (f *StoredFlag) Evaluate(evalCtx map[string]interface{}) (string, openfeature.Reason, interface{}, error) {
    var defaultVariant *Variant
    for i, variant := range f.Variants {
        if variant.Name == f.DefaultVariant {
            // point at the slice element, not the loop variable, which is reused on each iteration
            defaultVariant = &f.Variants[i]
        }
        // skip variants whose targeting key doesn't match the context
        if variant.TargetingKey != "" && variant.TargetingKey != evalCtx["targetingKey"] {
            continue
        }
        // a variant matches only if every criteria key/value pair is present in the context
        match := true
        for _, criteria := range variant.Criteria {
            val, ok := evalCtx[criteria.Key]
            if !ok || val != criteria.Value {
                match = false
                break
            }
        }
        if match {
            return variant.Name, openfeature.TargetingMatchReason, variant.Value, nil
        }
    }
    if defaultVariant == nil {
        return "", openfeature.ErrorReason, nil, openfeature.NewParseErrorResolutionError("")
    }
    return defaultVariant.Name, openfeature.DefaultReason, defaultVariant.Value, nil
}

The above function demonstrates how this basic evaluator works in this example; of course, in other providers the evaluation logic can be far more complex, and it does not need to sit within the application.
This JSON evaluator can then be paired with a simple function for reading and parsing the StoredFlag values from environment variables (as seen in the example below), leaving only the integration with the go-sdk (and some testing!).

func FetchStoredFlag(key string) (StoredFlag, error) {
    v := StoredFlag{}
    if val := os.Getenv(key); val != "" {
        if err := json.Unmarshal([]byte(val), &v); err != nil {
            return v, openfeature.NewParseErrorResolutionError(err.Error())
        }
        return v, nil
    }
    return v, openfeature.NewFlagNotFoundResolutionError("")
}

Creating a Compliant Provider

Repository Setup

Providers written for the go-sdk are all maintained in the go-sdk-contrib repository, containing both hooks and providers.
The following commands can be used to set up the go-sdk-contrib repository; they will clone the repository and set up your provider-specific Go module under /providers/MY-NEW-PROVIDER-NAME, adding a go.mod and README.md file. Your module will then be referenced in the top-level go.work file.

git clone https://github.com/open-feature/go-sdk-contrib.git
cd go-sdk-contrib
make PROVIDER=MY-NEW-PROVIDER-NAME new-provider
make workspace-init

Creating Your Provider

In order for your feature flag Provider to be compatible with the OpenFeature go-sdk, it will need to comply with the OpenFeature spec. For the go-sdk this means that it will need to conform to the following interface:

type FeatureProvider interface {
    Metadata() Metadata
    BooleanEvaluation(flagKey string, defaultValue bool, evalCtx FlattenedContext) BoolResolutionDetail
    StringEvaluation(flagKey string, defaultValue string, evalCtx FlattenedContext) StringResolutionDetail
    FloatEvaluation(flagKey string, defaultValue float64, evalCtx FlattenedContext) FloatResolutionDetail
    IntEvaluation(flagKey string, defaultValue int64, evalCtx FlattenedContext) IntResolutionDetail
    ObjectEvaluation(flagKey string, defaultValue interface{}, evalCtx FlattenedContext) InterfaceResolutionDetail
    Hooks() []Hook
}
  • flagKey: A string key representing the flag configuration used in this evaluation
  • defaultValue: The default response to be returned in the case of an error
  • evalCtx: The underlying type of FlattenedContext is map[string]interface{}; it provides ambient information for the purposes of flag evaluation, effectively acting as metadata for a request
  • ProviderResolutionDetail: The provider response object from a flag evaluation; it contains the following fields: Variant (string), Reason (openfeature.Reason), ResolutionError (openfeature.ResolutionError)
  • XxxResolutionDetail: The type-specific wrapper for the ProviderResolutionDetail struct; contains two attributes: Value (type specific) and ProviderResolutionDetail (ProviderResolutionDetail)

We can use our previously defined logic to build the Evaluation methods with ease. In the example below the core logic has been refactored into a separate function (resolveFlag()) to reduce code repetition, returning the type-naive InterfaceResolutionDetail structure directly. This means that the only type-specific code required is a type cast of the returned interface{} value, and the wrapping of the result in a type-specific ResolutionDetail, e.g. BoolResolutionDetail.

type Provider struct {
    EnvFetch func(key string) (StoredFlag, error)
}

func (p *Provider) resolveFlag(flagKey string, defaultValue interface{}, evalCtx openfeature.FlattenedContext) openfeature.InterfaceResolutionDetail {
    // fetch the stored flag from environment variables
    res, err := p.EnvFetch(flagKey)
    if err != nil {
        var e openfeature.ResolutionError
        if !errors.As(err, &e) {
            e = openfeature.NewGeneralResolutionError(err.Error())
        }

        return openfeature.InterfaceResolutionDetail{
            Value: defaultValue,
            ProviderResolutionDetail: openfeature.ProviderResolutionDetail{
                ResolutionError: e,
                Reason:          openfeature.ErrorReason,
            },
        }
    }
    // evaluate the stored flag to return the variant, reason, value and error
    variant, reason, value, err := res.Evaluate(evalCtx)
    if err != nil {
        var e openfeature.ResolutionError
        if !errors.As(err, &e) {
            e = openfeature.NewGeneralResolutionError(err.Error())
        }
        return openfeature.InterfaceResolutionDetail{
            Value: defaultValue,
            ProviderResolutionDetail: openfeature.ProviderResolutionDetail{
                ResolutionError: e,
                Reason:          openfeature.ErrorReason,
            },
        }
    }

    // return the type-naive ResolutionDetail structure
    return openfeature.InterfaceResolutionDetail{
        Value: value,
        ProviderResolutionDetail: openfeature.ProviderResolutionDetail{
            Variant: variant,
            Reason:  reason,
        },
    }
}

func (p *Provider) BooleanEvaluation(flagKey string, defaultValue bool, evalCtx openfeature.FlattenedContext) openfeature.BoolResolutionDetail {
    res := p.resolveFlag(flagKey, defaultValue, evalCtx)
    // ensure the returned value is a bool
    v, ok := res.Value.(bool)
    if !ok {
        return openfeature.BoolResolutionDetail{
            Value: defaultValue,
            ProviderResolutionDetail: openfeature.ProviderResolutionDetail{
                ResolutionError: openfeature.NewTypeMismatchResolutionError(""),
                Reason:          openfeature.ErrorReason,
            },
        }
    }
    // wrap the ResolutionDetail in a type-specific parent
    return openfeature.BoolResolutionDetail{
        Value:                    v,
        ProviderResolutionDetail: res.ProviderResolutionDetail,
    }
}

Based upon this BooleanEvaluation method, the remaining Evaluation methods are simple to populate, leaving only two more methods: Metadata and Hooks.

The Metadata() method is very simple to implement: it just needs to return a Metadata object, which currently requires only one field - Name.

func (p *Provider) Metadata() openfeature.Metadata {
    return openfeature.Metadata{
        Name: "environment-flag-evaluator",
    }
}

The Hooks() method gives the go-sdk access to Provider hooks, which sit outside the scope of this tutorial, so for now we will just return an empty slice of hooks (see the spec for details).

func (p *Provider) Hooks() []openfeature.Hook {
    return []openfeature.Hook{}
}

Now that the Provider conforms to the OpenFeature spec, it can be registered to the OpenFeature go-sdk and used for flag evaluation.

Example usage

package main

import (
    "context"
    "encoding/json"
    "fmt"
    "os"

    fromEnv "github.com/open-feature/go-sdk-contrib/providers/from-env/pkg"
    "github.com/open-feature/go-sdk/pkg/openfeature"
)

// init function sets a bool flag environment variable called AM_I_YELLOW
func init() {
    flagDefinition := fromEnv.StoredFlag{
        DefaultVariant: "not-yellow",
        Variants: []fromEnv.Variant{
            {
                Name:         "yellow-with-targeting",
                TargetingKey: "user",
                Value:        true,
                Criteria: []fromEnv.Criteria{
                    {
                        Key:   "color",
                        Value: "yellow",
                    },
                },
            },
            {
                Name:         "yellow",
                TargetingKey: "",
                Value:        true,
                Criteria: []fromEnv.Criteria{
                    {
                        Key:   "color",
                        Value: "yellow",
                    },
                },
            },
            {
                Name:         "not-yellow",
                TargetingKey: "",
                Value:        false,
                Criteria: []fromEnv.Criteria{
                    {
                        Key:   "color",
                        Value: "not yellow",
                    },
                },
            },
        },
    }
    flagM, _ := json.Marshal(flagDefinition)
    os.Setenv("AM_I_YELLOW", string(flagM))
}

func main() {
    // create an instance of the new provider
    provider := fromEnv.FromEnvProvider{}
    // register the provider against the go-sdk
    openfeature.SetProvider(&provider)
    // create a client via the go-sdk
    client := openfeature.NewClient("am-i-yellow-client")

    // we are now able to evaluate our stored flags, providing different FlattenedContexts to manipulate the response
    fmt.Println("I am yellow!")
    boolRes, err := client.BooleanValueDetails(
        context.Background(),
        "AM_I_YELLOW",
        false,
        openfeature.NewEvaluationContext(
            "",
            map[string]interface{}{
                "color": "yellow",
            },
        ),
    )
    printResponse(boolRes.Value, boolRes.ResolutionDetail, err)

    fmt.Println("I am yellow with targeting!")
    boolRes, err = client.BooleanValueDetails(
        context.Background(),
        "AM_I_YELLOW",
        false,
        openfeature.NewEvaluationContext(
            "user",
            map[string]interface{}{
                "color": "yellow",
            },
        ),
    )
    printResponse(boolRes.Value, boolRes.ResolutionDetail, err)

    fmt.Println("I am asking for a string!")
    strRes, err := client.StringValueDetails(
        context.Background(),
        "AM_I_YELLOW",
        "i am a default value",
        openfeature.NewEvaluationContext(
            "",
            map[string]interface{}{
                "color": "not yellow",
            },
        ),
    )
    printResponse(strRes.Value, strRes.ResolutionDetail, err)
}

// simple response printing function
func printResponse(value interface{}, resDetail openfeature.ResolutionDetail, err error) {
    fmt.Printf("value: %v\n", value)
    if err != nil {
        fmt.Printf("error: %v\n", err)
    } else {
        fmt.Printf("variant: %v\n", resDetail.Variant)
        fmt.Printf("reason: %v\n", resDetail.Reason)
    }
    fmt.Println("--------")
}

Console output:

I am yellow!
value: true
variant: yellow
reason: TARGETING_MATCH
--------
I am yellow with targeting!
value: true
variant: yellow-with-targeting
reason: TARGETING_MATCH
--------
I am asking for a string!
value: i am a default value
error: evaluate the flag: TYPE_MISMATCH
--------

The provider used in this example can be found here

· 3 min read
Skye Gill

Hooks

A Hook taps into one or more of the flag evaluation's lifecycle events (before/after/error/finally) to perform the same action at that point for every evaluation.

Objective

Create and integrate a spec compliant hook that validates that the result of a flag evaluation is a hex color.

Prerequisites

  • Golang 1.17+

Repository setup

Hooks written for the go-sdk are all maintained in the go-sdk-contrib repository, containing both hooks and providers. The following commands can be used to set up the go-sdk-contrib repository; they clone the repository and set up your hook-specific Go module under /hooks/MY-NEW-HOOK-NAME, adding a go.mod and README.md file. The module will then be referenced in the top-level go.work file.

git clone https://github.com/open-feature/go-sdk-contrib.git
cd go-sdk-contrib
make HOOK=MY-NEW-HOOK-NAME new-hook
make workspace-init

Creating the hook

In order for the Hook to be compatible with the OpenFeature go-sdk, it will need to comply with the OpenFeature spec. For the go-sdk this means that it will need to conform to the following interface:

type Hook interface {
    Before(hookContext HookContext, hookHints HookHints) (*EvaluationContext, error)
    After(hookContext HookContext, flagEvaluationDetails InterfaceEvaluationDetails, hookHints HookHints) error
    Error(hookContext HookContext, err error, hookHints HookHints)
    Finally(hookContext HookContext, hookHints HookHints)
}

In order to conform to the interface we are forced to implement all of these functions, despite only wanting to tap into the After lifecycle event. Let's leave the other functions empty to achieve this:

// Hook validates the flag evaluation details After flag resolution
type Hook struct {
    Validator validator
}

func (h Hook) Before(hookContext of.HookContext, hookHints of.HookHints) (*of.EvaluationContext, error) {
    return nil, nil
}

func (h Hook) After(hookContext of.HookContext, flagEvaluationDetails of.InterfaceEvaluationDetails, hookHints of.HookHints) error {
    err := h.Validator.IsValid(flagEvaluationDetails)
    if err != nil {
        return err
    }

    return nil
}

func (h Hook) Error(hookContext of.HookContext, err error, hookHints of.HookHints) {}

func (h Hook) Finally(hookContext of.HookContext, hookHints of.HookHints) {}

Notice the Validator field of type validator in the Hook struct. This is defined as such:

type validator interface {
    IsValid(flagEvaluationDetails of.InterfaceEvaluationDetails) error
}

This allows us to supply any validator that implements this function signature; you can either create your own validator or use one of the existing validators. This tutorial uses the existing hex regex validator.

Integrating the hook

  1. Install dependencies

    go get github.com/open-feature/go-sdk
    go get github.com/open-feature/go-sdk-contrib/hooks/validator
  2. Import the dependencies

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/open-feature/go-sdk-contrib/hooks/validator/pkg/regex"
        "github.com/open-feature/go-sdk-contrib/hooks/validator/pkg/validator"
        "github.com/open-feature/go-sdk/pkg/openfeature"
    )
  3. Create an instance of the validator hook struct using the regex hex validator

    func main() {
        hexValidator, err := regex.Hex()
        if err != nil {
            log.Fatal(err)
        }
        v := validator.Hook{Validator: hexValidator}
    }
  4. Register the NoopProvider, which simply returns the given default value on flag evaluation.

    This step is optional; the SDK uses the NoopProvider by default, but we're explicitly setting it here for completeness

    openfeature.SetProvider(openfeature.NoopProvider{})
  5. Create the client, then call the flag evaluation using the validator hook at the point of invocation

    client := openfeature.NewClient("foo")

    result, err := client.StringValueDetails(
        context.Background(),
        "blue",
        "invalidhex",
        openfeature.EvaluationContext{},
        openfeature.WithHooks(v),
    )
    if err != nil {
        fmt.Println("err:", err)
    }
    fmt.Println("result:", result)
  6. Check that the flag evaluation returns an error as invalidhex is not a valid hex color

    go run main.go
    err: execute after hook: regex doesn't match on flag value
    result {blue 1 {invalidhex }}

    Note that despite getting an error we still get a result.

· 6 min read
Pete Hodgson

I've recently been involved[1] in OpenFeature, an effort to define a standard API and SDK for feature flagging. At first glance, you might wonder whether feature flagging needs a standard. It's just a function call and an if statement, right? Well, no, not really. I'll explain why, and then talk about some of the benefits that I hope OpenFeature will bring to the space.

The Feature Flagging iceberg

When I talk to people about adopting feature flags, I often describe feature flag management as a bit of an iceberg. On the surface, feature flagging seems really simple, almost trivial. You call a function to find out the state of a flag, and then you either go down one code path, or the other. However, once you get into it, it turns out that there's a fair bit of complexity lurking under the surface.

Organizations that begin using feature flags at any sort of scale quickly learn that they need some of that functionality lurking under the surface. This is why flag management platforms like LaunchDarkly and Split.io exist. Their value is not in providing a fancy if statement, it's in all those extra features lurking below the surface - a web-based management interface, the ability to perform controlled incremental rollout, permissions and audit trails, integration into analytics systems, and so on.

Everybody needs an SDK

While most of the value of a flag management platform lies beneath the surface, each platform still has to provide that surface-level capability - the ability to evaluate a flag at runtime. And that ability needs to be available in each tech stack. So every flag management vendor ends up maintaining a small flock of feature flagging SDKs in various tech stacks. Even when we're just talking about glorified if statements, this is actually a lot of work, and it's work that is duplicated by each feature management platform.

This is where OpenFeature comes in. By defining a standard API and providing a common SDK, it allows vendors to focus on just implementing a small vendor-specific integration kernel (a "provider") in each language, which then plugs into the OpenFeature SDK. This leaves the bulk of the flag evaluation functionality in a given tech stack to be built once in a shared, vendor-neutral implementation.
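To make the "provider" idea concrete, here's a rough TypeScript sketch of the kind of integration kernel a vendor might implement. The method and type names loosely approximate the OpenFeature JS SDK's provider interface, and VendorClient is imaginary:

interface EvaluationContext {
  targetingKey?: string;
  [attribute: string]: unknown;
}

interface ResolutionDetails<T> {
  value: T;
  variant?: string;
  reason?: string;
}

// an imaginary vendor-specific client
interface VendorClient {
  decide(flagKey: string, context: EvaluationContext): boolean;
}

class MyVendorProvider {
  readonly metadata = { name: 'my-vendor' };

  constructor(private vendor: VendorClient) {}

  // the only vendor-specific work: translating between the vendor's own API
  // and OpenFeature's resolution shape; the shared SDK does the rest
  async resolveBooleanEvaluation(
    flagKey: string,
    defaultValue: boolean,
    context: EvaluationContext,
  ): Promise<ResolutionDetails<boolean>> {
    try {
      return { value: this.vendor.decide(flagKey, context) };
    } catch {
      return { value: defaultValue, reason: 'ERROR' };
    }
  }
}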

OpenTelemetry, but for feature flags

The model here is similar to the (very successful) OpenTelemetry project in the observability space. By defining a shared, open core, OpenTelemetry has allowed vendors in the observability space to work within a shared ecosystem of open-source instrumentation libraries.

Rather than each vendor building their own library to instrument the myriad libraries and runtimes that are out there, every system can use the same shared set of instrumentation libraries. Looking in the JavaScript ecosystem, for example, there are instrumentation libraries for Next.js, Express, Fastify, Mongo, knex, typeorm, Redis, GraphQL, and many more.

From Effort(n*m) to Effort(n+m)

Before OpenTelemetry, it wasn't really feasible to develop really high-quality instrumentation for every library. Observability vendors didn't have the capacity or deep experience to build rich, idiomatic instrumentation into every library out there, and the library maintainers certainly didn't have the bandwidth to create instrumentation support for every observability platform. OpenTelemetry solved this by creating a single target for both sides to support. It turned an N*M problem into an N+M problem.

My hope is that OpenFeature will do the same for feature flagging. This can happen at two levels. Firstly, OpenFeature will provide a single target for framework-specific feature flag evaluation, which can be done in a really idiomatic, ergonomic way. For example, a React feature flagging client can provide flag evaluation mechanisms using hooks, providers and components, rather than plain old JavaScript functions. Various vendors have already made idiomatic additions like this, but those ergonomic improvements are spread across vendor-specific libraries, and frankly none of them are perfect. If we could get to a single, vendor-neutral library then engineering effort could be focused in one place, and users of the client would not need to re-learn a slightly different API for each vendor. The same situation applies for every runtime that needs feature flag evaluation - we can build a single, standard feature flag evaluation client for Android, React Native, iOS, Spring Boot, Elixir, Django, Fastify, Vertx, gin, and so on. And each of these clients can provide a rich, idiomatic API that provides a great developer experience in each of these frameworks.

Flag evaluation requires context

As a side note, "feature flag evaluation" means more than just "check a flag and return a boolean". Any non-trivial flag evaluation also requires context - which user is this flag being evaluated for, or which demographic market, or which environment, or which server. That contextual information is often available in one place (a request handler for example) but needed in another place - at the point a flag is being evaluated. A naive feature flagging client forces the developer to track this context themselves, so that they can pass it into the client during flag evaluation. A delightful feature flagging client provides the ability for the context to be recorded in one place (often using thread-local storage or an equivalent), and then automatically applied during flag evaluation. Most flagging clients do not do this automatically, because of this N*M problem. OpenFeature would make this sort of delightful experience much more feasible. It would be very straightforward to write a little extension to an authentication library such as Passport.js that would record context about the current user into OpenFeature, so that it's automatically applied during any subsequent flag evaluation.
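As a sketch of what that could look like in a Node/Express app, with AsyncLocalStorage playing the role of thread-local storage; the middleware and flagClient wrapper here are invented for illustration:

import { AsyncLocalStorage } from 'node:async_hooks';
import express from 'express';

type EvalContext = { targetingKey?: string; [attr: string]: unknown };

// a stand-in for a real flag evaluation client
declare const flagClient: {
  getBooleanValue(key: string, def: boolean, ctx: EvalContext): Promise<boolean>;
};

const contextStore = new AsyncLocalStorage<EvalContext>();
const app = express();

// middleware (e.g. hooked into Passport.js) records the user context once...
app.use((req, res, next) => {
  const userId = (req as any).user?.id ?? 'anonymous';
  contextStore.run({ targetingKey: userId }, next);
});

// ...and a thin wrapper applies it automatically during any later flag evaluation
async function getBooleanValue(flagKey: string, defaultValue: boolean): Promise<boolean> {
  return flagClient.getBooleanValue(flagKey, defaultValue, contextStore.getStore() ?? {});
}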

A win-win-win

My hope is that OpenFeature will provide a benefit that's greater than the sum of its parts, something that's a win for vendors, for open-source maintainers, and for teams using feature flags. Flag management platforms will be freed from having to each maintain their tiresome heap of flag evaluation clients. Framework communities will have the opportunity to write rich, idiomatic flag evaluation clients which target a standard, vendor-neutral flag evaluation API. Finally, developers using feature flags will have more ergonomic and delightful feature flagging capabilities, using an API which remains constant no matter which feature flagging platform they're using.

OpenFeature is actively looking for more participants. If you'd like to get involved, don't be shy! Join a community call, or join the #OpenFeature channel on the CNCF Slack, and help us build a great open standard that benefits the industry.


  1. I'm currently serving as a member of the Bootstrap Governing Committee.