Add CEL validation test for targetRef in ClientSettingsPolicy #3623
base: main
Conversation
Hi @shaun-nx! Welcome to the project! 🎉 Thanks for opening this pull request!
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main    #3623      +/-   ##
==========================================
- Coverage   87.05%   87.03%   -0.02%
==========================================
  Files         127      127
  Lines       15602    15602
  Branches       62       62
==========================================
- Hits        13582    13579       -3
- Misses       1861     1863       +2
- Partials      159      160       +1
I think what we want is to make sure our CEL validation is correct. This test file is testing Go logic rather than our actual validation; that is, we need to test the CEL expressions themselves, not the logic we hope they are enforcing.
There is a package, k8s.io/apiserver/pkg/cel, that might be useful for CEL validation.
Or, now that I've read the parent issue, we could follow the pattern set in the Gateway API and simply run some functional tests where we actually create the resources and verify that the validation is working that way.
@ciarams87, though looking at the CEL tests in the Gateway API, https://github.com/kubernetes-sigs/gateway-api/blob/main/pkg/test/cel/httproute_test.go, doesn't this PR follow the same standard set by them? EDIT: actually never mind, I see where the Gateway API creates the resource.
Also, regarding file structure, I would support a single test file per CRD, similar to how the Gateway API repo does it. And I would mostly agree with @ciarams87, though I'm not sure a full functional test is needed; I would instead follow what's done in the Gateway API and create the actual CRD object through the k8s client and validate the errors that way.
Thanks @bjee19 and @ciarams87 for the input! I didn't notice originally that the CEL tests in the Gateway API created resources in the test. I'll update my tests now to better follow that pattern. I'll also adjust the filename as per your suggestion @bjee19 :)
I probably should have called it an integration test, but yes, that's what I meant @bjee19: create the actual CRD object through the k8s client. That way, we are testing the actual CEL expressions in the CRD. @bjee19 already gave you an example from the Gateway API, but here is one specific to ClientSettingsPolicy for clarity:
func TestClientSettingsPolicyValidation(t *testing.T) {
	ctx := context.Background()

	testCases := []struct {
		desc       string
		policy     *ngfAPIv1alpha1.ClientSettingsPolicy
		wantErrors []string
	}{
		{
			desc: "valid Gateway targetRef",
			policy: &ngfAPIv1alpha1.ClientSettingsPolicy{
				ObjectMeta: metav1.ObjectMeta{
					Name:      "test-policy",
					Namespace: "default",
				},
				Spec: ngfAPIv1alpha1.ClientSettingsPolicySpec{
					TargetRef: gatewayv1alpha2.LocalPolicyTargetReference{
						Kind:  "Gateway",
						Group: "gateway.networking.k8s.io",
						Name:  "test-gateway",
					},
				},
			},
			// No wantErrors = creation should succeed.
		},
		{
			desc: "invalid targetRef kind",
			policy: &ngfAPIv1alpha1.ClientSettingsPolicy{
				ObjectMeta: metav1.ObjectMeta{
					Name:      "test-policy-invalid",
					Namespace: "default",
				},
				Spec: ngfAPIv1alpha1.ClientSettingsPolicySpec{
					TargetRef: gatewayv1alpha2.LocalPolicyTargetReference{
						Kind:  "Service", // Invalid: not one of the allowed kinds.
						Group: "gateway.networking.k8s.io",
						Name:  "test-service",
					},
				},
			},
			wantErrors: []string{"TargetRef Kind must be one of: Gateway, HTTPRoute, or GRPCRoute"},
		},
	}

	for _, tc := range testCases {
		t.Run(tc.desc, func(t *testing.T) {
			err := k8sClient.Create(ctx, tc.policy)
			if (len(tc.wantErrors) != 0) != (err != nil) {
				t.Fatalf("got err=%v; want errors=%v", err, tc.wantErrors)
			}
			if err != nil {
				for _, wantError := range tc.wantErrors {
					if !strings.Contains(err.Error(), wantError) {
						t.Errorf("missing expected error %q in %v", wantError, err)
					}
				}
			}
		})
	}
}
Notice we are calling err := k8sClient.Create(ctx, tc.policy) to create the actual Policy object, and then asserting on that creation: if we are creating an invalid policy, we expect an error, and we assert that the error is one we expect. Otherwise, we are testing Go logic in the test file instead of the actual CEL validation rules that Kubernetes will enforce.
Your current approach creates a ClientSettingsPolicy struct in memory and then checks it against a Go map, which doesn't invoke the actual CEL validation. When users apply the CRD to a Kubernetes cluster, it's the CEL expressions defined in the CRD spec that validate the resource.
By using k8sClient.Create(), we're testing the exact same validation path that real users will experience: the Kubernetes API server will evaluate your CRD's CEL rules against the incoming resource and either accept or reject it with the actual error messages that CEL produces.
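For reference, rules like this live in the CRD schema under x-kubernetes-validations; the following excerpt is a hypothetical sketch of what such a rule could look like, not copied from the NGF CRD (which is generated from kubebuilder markers and may differ):

```yaml
# Hypothetical excerpt of a generated CRD schema (illustrative only).
properties:
  targetRef:
    type: object
    x-kubernetes-validations:
      - rule: "self.kind == 'Gateway' || self.kind == 'HTTPRoute' || self.kind == 'GRPCRoute'"
        message: "TargetRef Kind must be one of: Gateway, HTTPRoute, or GRPCRoute"
```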
Hopefully this makes sense!
@bjee19 @sjberman I made an update in this commit to initialize the k8sClient at the start of each test function instead of within each test iteration. This is mainly so that each test function uses its own k8sClient rather than making a new one for each test case.
Proposed changes
Add tests for CEL validation of targetRef in ClientSettingsPolicy.
Resolves the targetRef CEL validation requirement of #3621.
Checklist
Before creating a PR, run through this checklist and mark each as complete.