Compare commits

...

572 Commits

Author SHA1 Message Date
254fe1556a v0.0.593 made PubSub more generic (namespace can be any comparable type)
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m54s
2025-07-16 12:50:36 +02:00
52e74b59f5 v0.0.592
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m30s
2025-07-16 12:46:18 +02:00
64f2cd7219 v0.0.591 implement namespaced PubSub Broker in dataext
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2025-07-16 12:44:55 +02:00
a29aec8fb5 v0.0.590 more rfctime equal fixes for chris
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m34s
2025-07-15 14:14:22 +02:00
8ea9b3f79f v0.0.589 improve Equal method of rfctime structs - prevents panic in cmp library
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m48s
2025-07-15 13:46:36 +02:00
a4b2a0589f v0.0.588
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m37s
2025-07-11 11:50:29 +02:00
4ef5f6059b v0.0.587
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m37s
2025-07-06 22:24:44 +02:00
b23a444aa2 v0.0.586
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m33s
2025-07-04 13:56:53 +02:00
09932046f8 v0.0.585
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m36s
2025-07-04 11:46:00 +02:00
37e52595a2 v0.0.584
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m33s
2025-06-26 16:48:07 +02:00
95d7c90492 v0.0.583
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m31s
2025-06-25 10:59:23 +02:00
23a3235c7e v0.0.582
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m33s
2025-06-25 10:51:38 +02:00
506d276962 v0.0.581
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m39s
2025-06-25 10:28:54 +02:00
2a0cf84416 v0.0.580 Add IsZero() to generated ID types
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m33s
2025-06-16 08:40:07 +02:00
073aa84dd4 v0.0.579 fix StackSkip count on exerr zero-logger for Build()
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m29s
2025-06-13 16:53:47 +02:00
a0dc9e92e4 v0.0.578
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m30s
2025-06-11 14:37:51 +02:00
98c591b019 v0.0.577
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m7s
2025-05-18 21:42:18 +02:00
a93b93a3cd v0.0.576
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m26s
2025-05-18 14:26:24 +02:00
49bc52d63e v0.0.575 DelayedCombiningInvoker
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m31s
2025-05-11 19:17:05 +02:00
959020e3c0 v0.0.574 add syncMap.clear()
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m37s
2025-05-07 15:28:15 +02:00
395e83acf6 panic bf url
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m36s
2025-05-06 19:17:49 +02:00
55ff89f179 v0.0.572 switch to git.blackforestbytes.com as module name
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2025-05-03 16:43:59 +02:00
cbaa283f74 v0.0.571 add AsAnyPtr() function to ids
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m45s
2025-04-30 21:06:24 +02:00
20fb1f5601 v0.0.570 add gin_host exerr metadata (for gin-auto-fields)
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m35s
2025-04-25 23:20:25 +02:00
cc58639306 v0.0.569
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m30s
2025-04-07 15:47:50 +02:00
cea822ffa6 v0.0.568 remove duplicate ids in ExErr.UniqueIDs
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m38s
2025-03-15 22:29:45 +01:00
c20ae20cc1 v0.0.567 Add ListenerOpt to exerr.RegisterListener (this is a breaking API change !! -- but will prevent more breakage later on...)
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m33s
2025-03-06 12:19:03 +01:00
f07cd79b96 v0.0.566
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m35s
2025-02-28 21:46:26 +01:00
164c462b96 v0.0.565
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m33s
2025-02-28 21:43:36 +01:00
5e6cb63f14 v0.0.564 always return non-nil ctx from ginext.Start() (improves nilaway)
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m7s
2025-02-10 13:04:05 +01:00
4832aa9d6c v0.0.563 Add 'ArrContains' alias for 'InArray'
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m7s
2025-01-31 21:16:42 +01:00
4d606d3131 v0.0.562 bf
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m4s
2025-01-29 11:24:20 +01:00
be9b9e8ccf v0.0.561 wmo PaginateIterateFunc+PaginateIterate
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m31s
2025-01-29 11:02:41 +01:00
28cdfc5bd2 v0.0.560 wmo ListIterateFunc + ListIterate
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m31s
2025-01-29 10:54:53 +01:00
10a6627323 v0.0.559 Add .Iterate and .IterateFunc methods to wmo
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m33s
2025-01-28 15:55:18 +01:00
06b3b4116e v0.0.558 update gojson (rebase onto go1.23.4)
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m19s
2025-01-10 15:37:04 +01:00
ff821390f7 Apply goext specific patches to gojson
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2025-01-10 15:35:14 +01:00
c8e9c34706 Reset gojson to golang/go|1.23.4 [removes all custom changes] 2025-01-10 11:49:29 +01:00
b7c48cb467 v0.0.556
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m29s
2025-01-09 10:41:00 +01:00
a0a80899f5 v0.0.555
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2025-01-09 10:39:56 +01:00
3543441b96 v0.0.554
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2025-01-09 10:39:31 +01:00
eef12da4e6 v0.0.553
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m24s
2025-01-09 10:29:22 +01:00
d009aafd4e v0.0.552 mathext.ClampOpt
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m50s
2025-01-05 03:40:15 +01:00
f7b4aa48d7 v0.0.551 change exerr.RecursiveMessage() logic: use messages of Wrap() if not empty
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m9s
2025-01-04 02:33:49 +01:00
36b092774d v0.0.550 ArrMapSum
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m12s
2024-12-26 00:23:24 +01:00
a8c6e39ac5 v0.0.549
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m4s
2024-12-10 13:24:06 +01:00
62f2ce9268 v0.0.548
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m56s
2024-12-09 17:39:35 +01:00
49375e90f0 v0.0.547 allow calling ListWithCount with pageSize=0
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m55s
2024-12-08 18:04:04 +01:00
d8cf255c80 v0.0.546 Fix ginext json-parse error when the bufferedReader was read beforehand
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m55s
2024-11-28 12:06:57 +01:00
b520282ba0 v0.0.545
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m6s
2024-11-27 13:21:45 +01:00
27cc9366b5 v0.0.544
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m0s
2024-11-26 15:10:27 +01:00
d9517fe73c v0.0.543
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m7s
2024-11-13 15:03:51 +01:00
8a92a6cc52 v0.0.542
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m9s
2024-11-07 13:13:12 +01:00
9b2028ab54 v0.0.541
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m3s
2024-11-05 14:38:42 +01:00
207fd331d5 v0.0.540 handle ct=nil same as ct=Start
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m25s
2024-10-28 14:35:05 +01:00
54b0d6701d v0.0.539
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m49s
2024-10-27 02:21:35 +02:00
fc2657179b v0.0.538
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m7s
2024-10-27 02:18:45 +02:00
d4894e31fe Merge branch 'cursortoken-paginated'
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-10-27 02:18:25 +02:00
0ddfaf666b v0.0.537 fix goext error print always showing error-type of highest-level error
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m12s
2024-10-26 23:38:13 +02:00
e154137105 Trying out paginated cursortoken variant [UNTESTED]
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m11s
2024-10-25 09:45:42 +02:00
9b9a79b4ad v0.0.536 revert bfcodegen changes
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m24s
2024-10-22 09:57:06 +02:00
5a8d7110e4 v0.0.535
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m52s
2024-10-22 09:41:59 +02:00
d47c84cd47 v0.0.534
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-10-22 09:40:53 +02:00
c571f3f888 v0.0.533 Made String() functions in bfcodegen nil-ptr safe
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m9s
2024-10-22 09:36:40 +02:00
e884ba6b89 v0.0.532
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m53s
2024-10-13 16:33:39 +02:00
1a8e31e5ef v0.0.531
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m58s
2024-10-13 16:10:55 +02:00
eccc0fe9e5 v0.0.530
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m31s
2024-10-09 11:15:26 +02:00
c8dec24a0d v0.0.529 handle PanicWrappedErr in exerr.FromError()
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m13s
2024-10-08 19:22:17 +02:00
b8cb989e54 v0.0.528
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m38s
2024-10-07 17:20:40 +02:00
ec672fbd49 v0.0.527
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-10-07 17:19:30 +02:00
cfb0b53fc7 v0.0.526
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m17s
2024-10-05 23:59:23 +02:00
a7389f44fa v0.0.525 upgrade go1.22 -> go1.23
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m4s
2024-10-05 02:45:20 +02:00
69f0fedd66 v0.0.524
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m22s
2024-10-05 01:41:10 +02:00
335ef4d8e8 v0.0.523 ringbuffer
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m23s
2024-10-05 01:28:46 +02:00
61801ff20d v0.0.522
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m53s
2024-10-05 01:12:00 +02:00
361dca5c85 v0.0.521 ctxext
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m56s
2024-10-05 01:06:36 +02:00
9f85a243e8 v0.0.520
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m7s
2024-10-05 01:02:25 +02:00
dc6cb274ee v0.0.519
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-10-05 00:58:15 +02:00
f6b47792a4 v0.0.518 Improve sq db-listener interface (breaking)
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m11s
2024-10-05 00:45:55 +02:00
295b3ef793 v0.0.517 add constructor funcs for tuples
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m21s
2024-10-02 11:31:34 +02:00
721c176337 v0.0.516
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m25s
2024-09-25 21:43:41 +02:00
ebba6545a3 v0.0.515
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 5m35s
2024-09-16 17:39:51 +02:00
19c7e22ced v0.0.514 fix mongo filter where the primary sort key is null in db (fallback to secondary)
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-09-16 17:39:18 +02:00
9f883b458f v0.0.513
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m55s
2024-09-16 15:27:32 +02:00
1f456c5134 v0.0.512
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 5m6s
2024-09-15 21:25:21 +02:00
d7fbef37db v0.0.511
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m10s
2024-09-15 18:22:07 +02:00
a1668b6e5a v0.0.510
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m24s
2024-09-13 18:06:49 +02:00
3a17edfaf0 v0.0.509
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 6m2s
2024-08-26 14:35:49 +02:00
3320a9c19d v0.0.508
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m25s
2024-08-25 17:36:20 +02:00
8dcd8a270a v0.0.507 fix jsonfilter:"-" not working
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 7m7s
2024-08-25 15:41:17 +02:00
03a9b276d8 v0.0.506 allow empty-string as value for enum
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 7m36s
2024-08-22 11:45:02 +02:00
9c8cde384f v0.0.505
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 6m17s
2024-08-08 15:57:05 +02:00
99b000ecf4 v0.0.504
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m53s
2024-08-07 19:44:45 +02:00
a173e30090 v0.0.503
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m52s
2024-08-07 19:37:38 +02:00
a3481a7d2d v0.0.502
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-08-07 19:35:23 +02:00
a8e6f98a89 v0.0.501
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-08-07 19:31:36 +02:00
ab805403b9 v0.0.500
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-08-07 19:30:38 +02:00
1e98d351ce v0.0.499
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 5m24s
2024-08-07 18:34:22 +02:00
c40bdc8e9e v0.0.498
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m13s
2024-08-07 17:26:35 +02:00
7204562879 v0.0.497
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 2m33s
2024-08-07 17:04:59 +02:00
741611a2e1 v0.0.496 wpdf fixes and wpdf test.go
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m58s
2024-08-07 15:34:06 +02:00
133aeb8374 v0.0.495
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m44s
2024-08-07 14:00:02 +02:00
b78a468632 v0.0.494 add tables to wpdf
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-08-07 13:57:29 +02:00
f1b4480e0f v0.0.493 fix panic in RegisterImage for very short images
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 5m22s
2024-08-07 09:22:37 +02:00
ffffe4bf24 v0.0.492
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 5m32s
2024-08-02 16:19:21 +02:00
413bf3c848 v0.0.491 small optimization in Paginate method
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 8m31s
2024-07-31 00:15:09 +02:00
646990b549 v0.0.490 documentation and extra-params in exerr
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m2s
2024-07-27 23:44:18 +02:00
e5818146a8 v0.0.489
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m54s
2024-07-23 14:21:03 +02:00
1310054121 v0.0.488 fix wpdf with 16bpp images
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m9s
2024-07-22 15:16:28 +02:00
49d423915c v0.0.487
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m54s
2024-07-18 17:45:56 +02:00
1962cb3c52 v0.0.486 add ginext -> CorsAllowHeader
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m51s
2024-07-18 17:29:18 +02:00
84f124dd4d v0.0.485
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m44s
2024-07-16 15:22:18 +02:00
ff8e066135 v0.0.484
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m58s
2024-07-16 15:16:56 +02:00
bc5c61e43d v0.0.483
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m37s
2024-07-16 15:08:37 +02:00
6ded615723 v0.0.482 mathext.Percentile
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m46s
2024-07-12 16:33:42 +02:00
abc8af525a v0.0.481
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m50s
2024-07-04 16:24:49 +02:00
19d943361b v0.0.480
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m20s
2024-07-02 11:32:22 +02:00
b464afae01 v0.0.479 AccessStruct
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m36s
2024-07-02 11:29:47 +02:00
56bc5e8285 v0.0.478
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m41s
2024-07-01 17:23:00 +02:00
cb95bb561c v0.0.477 add langext.StrWrap
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m4s
2024-06-29 15:36:39 +02:00
dff8941bd3 v0.0.476 properly close cursor in wmo
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m44s
2024-06-28 18:37:02 +02:00
78e1c33e30 v0.0.475 ArrGroupBy
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m36s
2024-06-16 17:14:21 +02:00
d2f2a0558a v0.0.474 Add ZeroLogger config field to exerr.Init to override used zerolog instance
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m27s
2024-06-14 23:18:58 +02:00
fc4bed4b9f v0.0.473 add ctx to wmo.FilterQuery|Sort|Pagination
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m23s
2024-06-14 17:24:59 +02:00
julian 94a7bf250d v0.0.472 changed gin engine initialization
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m34s
2024-06-14 14:56:41 +02:00
f6121a6961 v0.0.471 Revert "v0.0.470 Add GoextJsonMarshaller interface to call when marshalling json via gojson"
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m50s
2024-06-11 19:39:43 +02:00
7fc73f1e93 v0.0.470 Add GoextJsonMarshaller interface to call when marshalling json via gojson
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m51s
2024-06-11 19:34:48 +02:00
2504ef00a0 v0.0.469
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m26s
2024-06-11 12:10:49 +02:00
fc5803493c added DblPtrIfNotNil
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m31s
2024-06-05 17:53:57 +02:00
a9295bfabf added CoalesceDblPtr
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m37s
2024-06-05 15:10:31 +02:00
12fa53d848 v0.0.466 exerr.Wrap now inherits the Severity of the wrapped error
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m24s
2024-06-03 13:48:30 +02:00
d2bb362135 v0.0.465
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m39s
2024-06-03 09:39:57 +02:00
9dd81f6bd5 v0.0.464 improve ZeroLogErrTraces/ZeroLogAllTraces output for empty-message wrapped exerrs
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m36s
2024-06-01 02:40:48 +02:00
d2c04afcd5 v0.0.463 Fix SubtractYears
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 3m29s
2024-05-29 20:20:01 +02:00
62980e1489 v0.0.462
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m32s
2024-05-23 14:37:05 +02:00
59963adf74 v0.0.461
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m17s
2024-05-20 00:52:49 +02:00
194ea4ace5 v0.0.460
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m16s
2024-05-20 00:38:04 +02:00
73b80a66bc v0.0.459
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m16s
2024-05-20 00:20:31 +02:00
d8b2d01274 v0.0.458 revert 457 and fix ObjectFitImage
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m49s
2024-05-20 00:15:24 +02:00
bfa8457e95 v0.0.457 test
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m38s
2024-05-20 00:07:33 +02:00
70106733d9 v0.0.456
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m38s
2024-05-18 23:38:47 +02:00
ce7837b9ef v0.0.455 add proper json/bson marshalling to exerr [severity|type|category]
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m39s
2024-05-16 15:38:42 +02:00
d0d72167eb v0.0.454
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m17s
2024-05-14 15:10:27 +02:00
a55ee1a6ce v0.0.453
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m18s
2024-05-14 14:57:10 +02:00
dfc319573c v0.0.452
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m16s
2024-05-14 12:48:43 +02:00
246e555f3f v0.0.451 wpdf image processing
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-05-14 12:46:49 +02:00
c28bc086b2 v0.0.450 wpdf
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 3m33s
2024-05-14 11:52:56 +02:00
d44e971325 v0.0.449
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m15s
2024-05-12 16:51:52 +02:00
fe4cdc48af v0.0.448 wmo marshalHook
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 25s
2024-05-12 16:45:45 +02:00
631006a4e1 v0.0.447
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m18s
2024-05-10 21:33:01 +02:00
567ead8697 v0.0.446 syncMap.GetAndSetIfNotContains and CASMutex
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-05-10 21:31:36 +02:00
e4886b4a7d v0.0.445 added CtxData() and ExtendGinMeta/ExtendContextMeta to exerr
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m55s
2024-05-03 15:28:53 +02:00
dcb5d3d7cd v0.0.444 change gin values in exerr auto meta to not include dots in keys (fucks up mongo)
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 3m9s
2024-05-03 13:24:08 +02:00
15a639f85a v0.0.443 fix wmo.List with pageSize==0
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 3m9s
2024-05-03 11:56:29 +02:00
303bd04649 v0.0.442
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m0s
2024-04-29 17:24:10 +02:00
7bda674939 v0.0.441
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 2m29s
2024-04-29 17:19:55 +02:00
126d4fbd0b v0.0.440 improve exerr.toJson
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 2m54s
2024-04-29 16:03:58 +02:00
fed8bccaab v0.0.439
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m38s
2024-04-25 11:47:16 +02:00
47b6a6b508 v0.0.438
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m20s
2024-04-25 11:40:01 +02:00
764ce79a71 v0.0.437 properly handle $group in wmo extraModPipeline
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 3m23s
2024-04-23 16:12:17 +02:00
b876c64ba2 v0.0.436
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m56s
2024-04-18 14:09:26 +02:00
8d52b41f57 v0.0.435 add ConvertStructToMapOpt.MaxDepth
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m39s
2024-04-15 12:55:44 +02:00
f47e2a33fe v0.0.434
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m13s
2024-04-15 10:43:26 +02:00
9321938dad v0.0.433 fix exerr missing gindata when using ginext.Error and add config for Output logging to stderr
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m0s
2024-04-15 10:25:30 +02:00
3828d601a2 v0.0.432 better handling of unmarshall-able values in exerr meta
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m12s
2024-04-13 22:08:45 +02:00
2e713c808d v0.0.431
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m10s
2024-04-10 15:29:59 +02:00
6602f86b43 v0.0.430
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-04-10 15:27:41 +02:00
24d9f0fdc7 v0.0.429
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m23s
2024-04-08 16:33:44 +02:00
8446b2da22 v0.0.428
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-04-08 16:32:34 +02:00
758e5a67b5 v0.0.427
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m30s
2024-04-07 15:10:21 +02:00
678ddd7124 v0.0.426 fix JsonOpt
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m57s
2024-04-01 16:03:00 +02:00
36b71dfaf3 v0.0.425 ArrAppend
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m25s
2024-03-30 14:24:53 +01:00
9491b72b8d v0.0.424 timeext.SubtractYears
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m29s
2024-03-30 03:01:55 +01:00
6c4af4006b v0.0.423 fix createPaginationPipeline - different primary and secondary sort keys broke mongo ??!?? - it actually only sorted by the secondary condition (ignoring the primary?)
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m32s
2024-03-24 15:25:52 +01:00
8bf3a337cf v0.0.422
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m29s
2024-03-23 20:29:46 +01:00
16146494dc v0.0.421
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 32s
2024-03-23 20:28:51 +01:00
b0e443ad99 v0.0.420
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 36s
2024-03-23 18:01:41 +01:00
9955eacf96 v0.0.419 JsonOpt
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 39s
2024-03-23 17:49:56 +01:00
f0347a9435 v0.0.418 fix tests?
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m1s
2024-03-20 09:42:06 +01:00
7c869c65f3 v0.0.417 add GinWrapper.ForwardRequest
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m26s
2024-03-20 08:58:59 +01:00
14f39a9162 v0.0.416
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m43s
2024-03-18 11:19:01 +01:00
dcd106c1cd v0.0.415 add 'tagkey' to gojson.Decoder
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m45s
2024-03-18 10:42:00 +01:00
b704e2a362 v0.0.414 fix rfctime.Date bson marshalling for zero value
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m28s
2024-03-16 19:42:59 +01:00
6b4bd5a6f8 v0.0.413 fix tests
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m33s
2024-03-11 21:00:30 +01:00
6df4f5f2a1 v0.0.412 fix GenerateIDSpecs accepting nil for opt
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m32s
2024-03-11 20:58:06 +01:00
780905ba35 v0.0.411
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m33s
2024-03-11 20:43:37 +01:00
c679797765 v0.0.410 add ginext.SuppressGinLogs
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-03-11 20:42:12 +01:00
401aad9fa4 v0.0.409
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m41s
2024-03-11 17:05:10 +01:00
645113d553 v0.0.408
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m43s
2024-03-11 16:41:47 +01:00
4a33986b6a v0.0.407 sq.Iterate
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-03-11 16:40:41 +01:00
c1c8c64c76 v0.0.406 bf
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m34s
2024-03-10 16:44:21 +01:00
0927fdc4d7 v0.0.405
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m36s
2024-03-10 15:28:26 +01:00
102a280dda v0.0.404
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m37s
2024-03-10 15:25:30 +01:00
f13384d794 v0.0.403 bf
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m37s
2024-03-10 12:58:59 +01:00
409d6e108d v0.0.402 add PackageName() and TypeName() to enums_codegen
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m12s
2024-03-10 12:49:31 +01:00
ed53f297bd v0.0.401 bf
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m27s
2024-03-09 15:07:03 +01:00
42424f4bc2 v0.0.400 added CommentTrimmer and DBOptions to sq
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m16s
2024-03-09 14:59:32 +01:00
9e5b8c5277 v0.0.399 added sq.NewAutoDBTypeConverter
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m25s
2024-03-09 14:16:35 +01:00
9abe28c490 v0.0.398 added As* version to sort functions
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m13s
2024-03-09 13:36:06 +01:00
422bbd8593 v0.0.397 added wmo.IColl interface for tim
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m2s
2024-03-04 12:17:10 +01:00
3956675e04 v0.0.396 sq.ConverterRFCSecondsF64ToString
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m2s
2024-02-28 14:28:48 +01:00
10c3780b52 v0.0.395 MapMerge
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m14s
2024-02-27 13:42:06 +01:00
8edc067a3b v0.0.394
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m31s
2024-02-21 18:40:42 +01:00
1007f2c834 v0.0.393
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m19s
2024-02-21 18:33:18 +01:00
c25da03217 v0.0.392
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-02-21 18:32:06 +01:00
4b55dbaacf v0.0.391
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m36s
2024-02-21 16:18:04 +01:00
c399fa42ae v0.0.390
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m26s
2024-02-21 16:11:15 +01:00
9e586f7706 v0.0.389
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-02-21 16:10:28 +01:00
3cc8dccc63 v0.0.388 fix ginext.Use losing absPath information
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m26s
2024-02-20 09:19:06 +01:00
7fedfbca81 v0.0.387 bf
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m24s
2024-02-12 18:17:49 +01:00
3c439ba428 v0.0.386 InsertAndQuerySingle
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m27s
2024-02-09 15:58:21 +01:00
ad24f6db44 v0.0.385 UpdateAndQuerySingle
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m29s
2024-02-09 15:40:09 +01:00
1869ff3d75 v0.0.384 QuerySingleOpt
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m28s
2024-02-09 15:20:46 +01:00
30ce8c4b60 v0.0.383 sq.InsertMultiple
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m15s
2024-02-09 15:17:51 +01:00
885bb53244 add BuildUrl test
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m59s
2024-02-09 12:27:18 +01:00
1c7dc1820a v0.0.382 add build url method
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2024-02-09 12:25:01 +01:00
7e16e799e4 v0.0.381
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m28s
2024-01-23 17:51:52 +01:00
890e16241d v0.0.380 exerr properly handle inf and nan floats
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m8s
2024-01-22 12:33:41 +01:00
b9d0348735 v0.0.379
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m57s
2024-01-19 17:30:20 +01:00
b9e9575b9b v0.0.378
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m18s
2024-01-16 15:04:10 +01:00
295a098eb4 v0.0.377 fix sq.ConverterBoolToBit
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m12s
2024-01-14 17:06:42 +01:00
b69a082bb1 v0.0.376
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m22s
2024-01-14 01:52:52 +01:00
a4a8c83d17 v0.0.375
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m25s
2024-01-14 01:50:48 +01:00
e952176bb0 v0.0.374 ppwgen
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m47s
2024-01-14 01:37:38 +01:00
d99adb203b v0.0.373 BuildInsertStatement
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m24s
2024-01-14 00:07:01 +01:00
f1f91f4cfa v0.0.372 sq.Paginate
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m25s
2024-01-13 21:36:47 +01:00
2afb265ea4 v0.0.371
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m26s
2024-01-13 14:19:19 +01:00
be24f7a190 v0.0.370 improve sq errors
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m36s
2024-01-13 14:10:25 +01:00
aae8a706e9 v0.0.369 autom. allow usage of existing converter for pointer-types (sq)
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m29s
2024-01-13 02:01:30 +01:00
7d64f18f54 v0.0.368
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m9s
2024-01-13 01:29:40 +01:00
d08b2e565a v0.0.367
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m32s
2024-01-12 18:40:29 +01:00
d29e84894d v0.0.366 ginext: set cookies
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m28s
2024-01-12 15:10:48 +01:00
617298c366 v0.0.365
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m24s
2024-01-09 18:23:46 +01:00
668f308565 v0.0.364 add ServerHTTP() to GinWrapper for integration testing
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m59s
2024-01-09 18:17:55 +01:00
240a8ed7aa v0.0.363 wmo.extraModPipeline as func
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m9s
2024-01-09 08:51:46 +01:00
70de8e8d04 v0.0.362
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m27s
2024-01-07 04:18:03 +01:00
d38fa60fbc v0.0.361 call exerrListener in ginext.Error
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m37s
2024-01-07 04:01:13 +01:00
5fba7e0e2f v0.0.360 bf
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m26s
2024-01-06 01:31:07 +01:00
8757643399 v0.0.359 fix tests
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m18s
2024-01-05 16:55:53 +01:00
42bd4cf58d v0.0.358
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m22s
2024-01-05 16:53:14 +01:00
413178e2d3 v0.0.357
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m4s
2024-01-05 10:59:06 +01:00
9264a2e99b v0.0.356 exerr.GetMeta
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m9s
2024-01-05 10:43:39 +01:00
2a0471fb3d v0.0.355 sq.Json
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m27s
2024-01-05 10:25:05 +01:00
1497c013f9 v0.0.354
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m13s
2024-01-05 07:21:43 +01:00
ef78b7467b v0.0.353 add scn.sendmessage
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 4m16s
2024-01-04 12:38:03 +01:00
0eda32b725 v0.0.352
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m1s
2023-12-29 19:29:36 +01:00
f9ccafb976 v0.0.351 sq value converter
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m30s
2023-12-29 19:25:36 +01:00
6e90239fef v0.0.350
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m2s
2023-12-29 18:06:45 +01:00
05580c384a v0.0.349
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m51s
2023-12-28 01:36:21 +01:00
3188b951fb v0.0.348 added listener and options to goext
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m35s
2023-12-27 20:29:37 +01:00
6b211d1443 fix tests
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m37s
2023-12-17 14:04:35 +01:00
b2b9b40792 v0.0.347
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m39s
2023-12-16 17:57:42 +01:00
2f915cb6c1 v0.0.346
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m56s
2023-12-13 16:22:15 +01:00
b2b93f570a v0.0.345
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 2m48s
2023-12-07 19:39:31 +01:00
8247fc4524 v0.0.344
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m11s
2023-12-07 19:36:21 +01:00
5dad44ad09 v0.0.343
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m2s
2023-12-07 18:29:17 +01:00
f042183433 v0.0.342 support json data in enum comment
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m29s
2023-12-07 17:57:06 +01:00
b0be93a7a0 v0.0.341
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m6s
2023-12-07 14:43:12 +01:00
1c143921e6 v0.0.340
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Has been cancelled
2023-12-07 14:42:25 +01:00
68e63a9cf6 v0.0.339
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m5s
2023-12-07 10:54:36 +01:00
c3162fec95 v0.0.338
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 59s
2023-12-05 19:50:24 +01:00
1124aa781a v0.0.337
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m6s
2023-12-05 19:45:35 +01:00
eef0e9f2aa v0.0.336
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m3s
2023-12-05 19:42:37 +01:00
af38b06d22 v0.0.335 added DescriptionMeta to enum codegen
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 58s
2023-12-05 19:38:03 +01:00
2fad6340c7 v0.0.334 allow dot in enum-value
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m3s
2023-12-05 19:23:27 +01:00
03aa0a2282 Merge branch 'feature/gmail_api'
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m23s
2023-12-04 13:56:18 +01:00
358c238f3d google mail API [[FIN]]
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m28s
2023-12-04 13:55:41 +01:00
d65ac8ba2b v0.0.329
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m2s
2023-12-02 13:38:17 +01:00
55d02b8c65 v0.0.328
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 59s
2023-12-02 13:35:18 +01:00
8a3965f666 v0.0.327
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m44s
2023-12-02 13:15:19 +01:00
4aa2f494b1 v0.0.326 ginext::WithSession
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m38s
2023-12-02 13:07:36 +01:00
8f13eb2f16 google mail API [[[WIP]]] 2023-12-01 18:33:04 +01:00
8f15d42173 v0.0.325
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 2m48s
2023-11-27 14:14:58 +01:00
07fa21dcca v0.0.324
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m29s
2023-11-25 15:48:28 +01:00
e657de7f78 v0.0.323 fix langext.IsNil for reflect.Array
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m35s
2023-11-16 17:15:44 +01:00
c534e998e8 v0.0.322 bf SecondsF64
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m1s
2023-11-14 16:31:05 +01:00
88642770c5 v0.0.321 Add .NoLog() to lowest-level query exerr.Wrap in wmo (otherwise we get error logs on stdout even if the calling method allows mongo.ErrNoDocuments)
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m4s
2023-11-14 16:00:14 +01:00
8528b5cb66 v0.0.320 bugfix sort
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m14s
2023-11-14 14:50:27 +01:00
5ba84bd8ee v0.0.319 fix error when findone+pipeline fails
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m15s
2023-11-13 16:45:00 +01:00
1260b2dc77 v0.0.318 add failure mail to testx.yml
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m9s
2023-11-13 15:34:58 +01:00
7d18b913c6 v0.0.317 try fix tests on pipeline
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m38s
2023-11-13 15:28:37 +01:00
d1f9069f2f v0.0.316 bugfix sorting
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 2m14s
2023-11-13 15:19:48 +01:00
fa6d73301e v0.0.315 atomic
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m26s
2023-11-12 03:10:55 +01:00
bfe62799d3 v0.0.314
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m22s
2023-11-10 13:37:55 +01:00
ede912eb7b v0.0.313
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m22s
2023-11-10 13:26:30 +01:00
ff8f128fe8 v0.0.312 improve exerr.RecursiveMessage()
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m23s
2023-11-10 10:16:31 +01:00
1971f1396f v0.0.311 BF
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m17s
2023-11-09 11:48:45 +01:00
bf6c184d12 v0.0.310 debug
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m18s
2023-11-09 11:40:48 +01:00
770f5c5c64 v0.0.309
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m17s
2023-11-09 10:17:29 +01:00
623c021689 v0.0.308
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m20s
2023-11-09 10:02:31 +01:00
afcc89bf9e v0.0.307
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m20s
2023-11-09 10:00:01 +01:00
1672e8f8fd v0.0.306
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m12s
2023-11-09 09:36:41 +01:00
398ed56d32 v0.0.305
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m21s
2023-11-09 09:35:56 +01:00
f3ecba3883 v0.0.304 add support for WithModifyingPipeline to wmo
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m22s
2023-11-09 09:26:46 +01:00
45031b05cf v0.0.303
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m35s
2023-11-08 19:01:15 +01:00
7413ea045d v0.0.302
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m41s
2023-11-08 18:53:02 +01:00
62c9a4e734 v0.0.301 pagination (page+limit) support in wmo
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 2m11s
2023-11-08 18:30:30 +01:00
3a8baaa6d9 v0.0.300 add custom unmarshal-hooks to wmo
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m45s
2023-11-04 18:55:44 +01:00
498785e213 v0.0.299 pctx.RawBody( *[]byte )
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m41s
2023-11-03 16:53:41 +01:00
678f95642c v0.0.298 use go/format instead of manual command invocation
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m6s
2023-11-01 04:20:08 +01:00
dacc97e2ce v0.0.297
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m1s
2023-11-01 00:31:51 +01:00
f8c0c0afa0 v0.0.296 add csid.generateIDFromSeed
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m4s
2023-11-01 00:29:58 +01:00
2fbd5cf965 v0.0.295 added generic base-conversion algorithm
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m4s
2023-11-01 00:23:17 +01:00
75f71fe3db v0.0.294 migrate bfcodegen to templates
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 58s
2023-10-31 22:58:28 +01:00
ab1a1ab6f6 v0.0.293 fix NPE
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m52s
2023-10-30 13:37:31 +01:00
19ee5019ef v0.0.292
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 3m11s
2023-10-30 10:14:38 +01:00
42b68507f2 v0.0.291
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 56s
2023-10-26 13:02:45 +02:00
9d0047a11e v0.0.290 csid
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m38s
2023-10-26 13:01:58 +02:00
06d81f1682 v0.0.289 fsext
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m31s
2023-10-26 11:29:08 +02:00
7b8ab03779 v0.0.288 default to recursive-error-msg in exerr.Error()
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 2m54s
2023-10-19 14:16:01 +02:00
07cbcf5a0a v0.0.287 fix bug in confext::ApplyEnvOverrides if a struct env key exists in the os.env
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m37s
2023-10-12 10:02:42 +02:00
da41ec3e84 run CICD tests without docker workaround
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m37s
2023-10-11 15:50:09 +02:00
592fae25af v0.0.286 allow spaces in enum-keys
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m14s
2023-10-11 11:27:18 +02:00
7968460fa2 v0.0.285
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m0s
2023-10-09 15:25:30 +02:00
b808c5727c v0.0.284
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m14s
2023-10-09 15:22:57 +02:00
796f7956b8 v0.0.283
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m5s
2023-10-09 15:17:22 +02:00
1e6b92d1d9 v0.0.282 ginext bugfix
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 48s
2023-10-09 09:23:40 +02:00
0b85fa5af9 v0.0.281 typo fix
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 50s
2023-10-09 09:04:07 +02:00
c3318cc1de v0.0.280 DYN-166 ginext jsonfilter middleware
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 51s
2023-10-09 09:02:37 +02:00
fbf4d7b915 v0.0.279 DYN-166 ginext jsonfilter middleware
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m12s
2023-10-09 08:55:22 +02:00
9cc0abf9e0 v0.0.278 DYN-166 bugfix jsonfilter
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 52s
2023-10-05 12:54:07 +02:00
7c40bcfd3c v0.0.277 DYN-166 json marshal filter in ginext Write
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 54s
2023-10-05 12:00:51 +02:00
05636a1e4d v0.0.276
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m12s
2023-10-05 10:59:20 +02:00
0f52b860ea DYN-166 add jsonfilter to json library
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 56s
2023-10-05 10:57:34 +02:00
b5cd116219 DYN-166 add jsonfilter to json library
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 47s
2023-10-05 10:45:09 +02:00
98486842ae v0.0.275 fix missing returns in (v MetaValue) ShortString
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 54s
2023-09-29 16:00:40 +02:00
7577a2dd47 v0.0.274 limit exerr log meta values (shortlog) to 240 chars
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 50s
2023-09-27 16:18:21 +02:00
08681756b6 v0.0.273 add stack to PanicWrappedErr
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m9s
2023-09-27 14:15:59 +02:00
64772d0474 v0.0.272 WMO: fix FindOneAndReplace not using FindOneAndReplace
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 50s
2023-09-26 14:41:15 +02:00
127764556e Merge branch 'master' of ssh://gogs.mikescher.com:8022/BlackForestBytes/goext 2023-09-26 14:41:06 +02:00
170f43d806 WMO: fix FindOneAndReplace not using FindOneAndReplace 2023-09-26 14:40:56 +02:00
9dffc41274 v0.0.271 return old value in AtomicBool::Set
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 53s
2023-09-26 14:32:45 +02:00
c63cf442f8 try to fix test 'cmdext:TestFailOnStderr'
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 39s
2023-09-25 18:04:56 +02:00
a2ba283632 v0.0.270 fix inversion of AssertDeepEqual
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 54s
2023-09-25 11:35:03 +02:00
4a1fb1ae18 fix inversion of AssertDeepEqual 2023-09-25 11:34:51 +02:00
a127b24e62 v0.0.269 add AssertSetDeepEqual
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m8s
2023-09-25 09:18:22 +02:00
69d6290376 add AssertSetDeepEqual 2023-09-25 09:18:07 +02:00
c08a739158 v0.0.268 added WeekStart() and WeekEnd()
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 50s
2023-09-21 16:29:23 +02:00
5f5f0e44f0 v0.0.267 fix AssertDeepEqual
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 50s
2023-09-21 14:15:02 +02:00
6e6797eac5 fix AssertDeepEqual 2023-09-21 14:14:51 +02:00
cd9406900a v0.0.266 fix tst showing wrong file:line
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m22s
2023-09-21 13:08:13 +02:00
6c81f7f6bc fix tst showing wrong file:line, add DeepEqual 2023-09-21 13:07:55 +02:00
d56a0235af v0.0.265 add ListWithCount
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 49s
2023-09-18 12:57:27 +02:00
de2ca763c1 add function for ListWithCount 2023-09-18 12:56:56 +02:00
da52bb5c90 v0.0.264 added Valid() to id-gen
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 49s
2023-09-18 11:46:17 +02:00
3d4afe7b25 v0.0.263 re-add checksum guard to id-generate
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 50s
2023-09-18 10:43:29 +02:00
f5766d639c v0.0.262 ignore _gen files in bfcodegen checksum-calc
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 46s
2023-09-18 10:42:43 +02:00
cdf2a6e76b v0.0.261 added id-generate.go
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m7s
2023-09-18 10:38:25 +02:00
6d7cfb86f8 v0.0.260 wmo: fix endless recursion in wmo reflection
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 50s
2023-09-12 11:40:39 +02:00
1e9d663ffe fix endless recursion in wmo reflection 2023-09-12 11:39:51 +02:00
5b8d7ebf87 v0.0.259 wmo: allow fields to pointers to structs
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 49s
2023-09-12 10:48:57 +02:00
11dc6d2640 use type instead of value for Reflection in Coll.initFields 2023-09-12 10:47:41 +02:00
29a3f73f15 v0.0.258
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m22s
2023-09-11 11:28:34 +02:00
98105642fc removed default sort 2023-09-11 11:28:26 +02:00
0fd5f3b417 v0.0.257 better handling if pagination is faulty in wmo list
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m2s
2023-09-05 15:01:55 +02:00
43cac4b3bb v0.0.256 bind header
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m4s
2023-08-28 10:44:38 +02:00
cd68af8e66 v0.0.255 tuples
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 2m43s
2023-08-24 09:47:32 +02:00
113d838876 v0.0.254 revert back to 0.0.250
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m14s
2023-08-22 10:49:57 +02:00
9e5bc0d3ea v0.0.253
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m23s
2023-08-22 10:36:35 +02:00
6d3bd13f61 v0.0.252
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m5s
2023-08-22 10:23:04 +02:00
b5ca475b3f v0.0.251 exerr.WithStackSkip
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m9s
2023-08-22 10:21:13 +02:00
a75b1291cb v0.0.250
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 51s
2023-08-21 15:34:27 +02:00
21cd1ee066 v0.0.249 better MDTAny json serialization
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 54s
2023-08-21 15:19:40 +02:00
ae43cbb623 v0.0.248 exerr in wmo package
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 55s
2023-08-21 15:08:35 +02:00
9b752a911c v0.0.247 -.-
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m12s
2023-08-21 14:23:44 +02:00
ec9ac26a4c v0.0.246 timeext
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 1m11s
2023-08-21 14:15:06 +02:00
39a0b73d56 v0.0.245
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 40s
2023-08-21 13:27:36 +02:00
2e2e15d4d2 v0.0.244
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 48s
2023-08-18 13:27:02 +02:00
0d16946aba v0.0.243
Some checks failed
Build Docker and Deploy / Run goext test-suite (push) Failing after 1m18s
2023-08-18 13:25:18 +02:00
14441c2378 Adde gitea workflow: tests
All checks were successful
Build Docker and Deploy / Run goext test-suite (push) Successful in 57s
2023-08-14 18:39:22 +02:00
f6bcdc9903 Merge remote-tracking branch 'origin/master' 2023-08-14 16:33:03 +02:00
a95053211c Fix tests 2023-08-14 16:32:39 +02:00
813ce71e3e v0.0.242 forgot to return something 2023-08-14 16:05:12 +02:00
56ae0cfc6c v0.0.241 join string array 2023-08-14 15:54:50 +02:00
202afc9068 v0.0.240 2023-08-14 15:36:12 +02:00
56094b3cb6 v0.0.239 pctx.WithTImeout 2023-08-11 16:32:34 +02:00
0da098e9f9 v0.0.238 2023-08-09 19:51:41 +02:00
f0881c9fd6 v0.0.237 parse application/x-www-form-urlencoded in ginext 2023-08-09 19:35:01 +02:00
029b408749 v0.0.236 cmdext.FailOnStdErr 2023-08-09 17:48:06 +02:00
84b2be3169 v0.0.235 added .Enum(..) to exerr 2023-08-09 14:40:16 +02:00
c872cecc67 v0.0.234 2023-08-09 10:39:14 +02:00
99cd92729e v0.0.233 IncludeMetaInGinOutput 2023-08-09 10:37:59 +02:00
ac416f7b69 v0.0.232 2023-08-08 18:01:00 +02:00
e10140e143 v0.0.231 2023-08-08 16:10:31 +02:00
e165f0f62f v0.0.230 2023-08-08 16:09:02 +02:00
655d4daad9 v0.0.229 2023-08-08 16:05:44 +02:00
87a004e577 v0.0.228 bf 2023-08-08 15:33:52 +02:00
376c6cab50 v0.0.227 error on duplicate exerr.ErrorType 2023-08-08 15:28:29 +02:00
4a3f25baa0 v0.0.226 2023-08-08 14:28:09 +02:00
aa33bc8df3 v0.0.225 2023-08-08 13:09:15 +02:00
96b3718375 v0.0.224 implement error.As(x) for exerr 2023-08-08 12:38:22 +02:00
5f9b55933b v0.0.223 2023-08-08 11:52:40 +02:00
74d42637e7 v0.0.222 forgot status code 2023-08-06 19:11:59 +02:00
0c05bcf29b v0.0.221 download file data 2023-08-06 19:10:31 +02:00
9136143f2f v0.0.220 add ginext.bufferBody 2023-08-03 09:09:27 +02:00
2f1b784dc2 v0.0.219 implement error.Is(*) for exerr 2023-07-28 15:42:12 +02:00
190584e0e6 v0.0.218 bf 2023-07-27 17:16:30 +02:00
b7003b9ec9 v0.0.217 2023-07-27 17:12:41 +02:00
4f871271e8 v0.0.216 2023-07-27 17:00:53 +02:00
91f4793678 v0.0.215 Add (ee *ExErr) ToAPIJson 2023-07-27 14:37:11 +02:00
3b30bb049e v0.0.214 reassign innerctx 2023-07-27 09:58:10 +02:00
f0c5b36ea9 v0.0.213 inject gin key value pairs into context 2023-07-27 09:46:06 +02:00
647ec64c3b v0.0.212 2023-07-26 10:44:26 +02:00
b5f9b6b638 v0.0.211 2023-07-26 10:40:42 +02:00
c7949febf2 v0.0.210 fix ginext route dump 2023-07-25 11:16:11 +02:00
15a4b2a713 v0.0.209 removed g context from err func 2023-07-25 10:56:03 +02:00
493c6ebae8 v0.0.208 remove context from err functions because its not used 2023-07-25 10:51:14 +02:00
fb847b03af v0.0.207 renamed APIError to Error 2023-07-25 10:47:00 +02:00
f826633e6e v0.0.206 2023-07-24 18:50:14 +02:00
edeae23bf1 v0.0.205 2023-07-24 18:47:48 +02:00
a038b86147 v0.0.204 2023-07-24 18:42:33 +02:00
ede0b99d3a v0.0.203 2023-07-24 18:38:04 +02:00
d04ce18eb0 v0.0.202 2023-07-24 18:34:56 +02:00
8ae9a0f107 v0.0.201 2023-07-24 18:22:36 +02:00
a259bb6dbc v0.0.200 2023-07-24 17:42:18 +02:00
adf32568ee v0.0.199 2023-07-24 17:23:38 +02:00
0cfa159cb1 v0.0.198 2023-07-24 14:16:02 +02:00
0ead99608a v0.0.197 2023-07-24 12:27:06 +02:00
7fe3e66cad v0.0.196 2023-07-24 11:47:47 +02:00
a73d7d1654 v0.0.195 2023-07-24 11:42:52 +02:00
bbd7a7bc2c v0.0.194 2023-07-24 11:40:47 +02:00
f5151eb214 v0.0.193 2023-07-24 11:38:57 +02:00
eefb9ac9f5 v0.0.192 2023-07-24 11:30:07 +02:00
468a7d212d v0.0.191 2023-07-24 11:18:25 +02:00
a4def75d06 v0.0.190 2023-07-24 11:16:57 +02:00
16c66ee28c v0.0.189 2023-07-24 11:11:15 +02:00
2e6ca48d22 v0.0.188 exerr MVP 2023-07-24 10:42:39 +02:00
b1d6509294 v0.0.187 forget to use function 2023-07-24 09:16:37 +02:00
e909d656d9 v0.0.186 convert array to interface arr 2023-07-24 09:13:19 +02:00
0971f60c30 v0.0.185 add Meta() to enums 2023-07-19 19:34:39 +02:00
d8270e53ed v0.0.184 re-add missing array methods from merge commit 56684b2c0b 2023-07-19 19:29:59 +02:00
1ee127937a v0.0.183 2023-07-19 19:24:58 +02:00
56684b2c0b exerr [WIP]
(cherry picked from commit c0443af63b)
2023-07-19 19:24:43 +02:00
1ea6695f82 v0.0.182 added optional commit message 2023-07-19 11:26:23 +02:00
5273ff7600 v0.0.181 2023-07-19 11:26:06 +02:00
caa69c3629 v0.0.180 Super Test 2023-07-19 11:24:11 +02:00
0ff5f0aa28 v0.0.179 TestTest 2023-07-19 11:22:25 +02:00
d5cb1e48ed v0.0.178 2023-07-19 11:20:35 +02:00
1c2d3f541f v0.0.177 2023-07-18 16:08:24 +02:00
ec62ad436f v0.0.176 2023-07-18 16:01:34 +02:00
8d0ef0f002 v0.0.175 2023-07-18 15:59:12 +02:00
d78550672e v0.0.174 2023-07-18 15:23:32 +02:00
1d629f6db8 v0.0.173 2023-07-18 15:12:06 +02:00
f7d291056d v0.0.172 2023-07-18 14:40:10 +02:00
710c257c64 v0.0.171 2023-07-18 13:34:54 +02:00
c320bb3d90 v0.0.170 2023-07-17 12:42:49 +02:00
2f01a1d50f v0.0.169 2023-07-05 19:27:49 +02:00
ffc57b7e89 v0.0.168 2023-07-05 19:27:15 +02:00
d88cd3c22b v0.0.167 2023-06-22 17:33:56 +02:00
ac5ad640bd v0.0.166 2023-06-22 15:07:06 +02:00
21d241f9b1 v0.0.163 2023-06-18 01:16:52 +02:00
2569c165f8 v0.0.162 2023-06-11 16:38:47 +02:00
ee262a94fb v0.0.161 2023-06-11 16:35:20 +02:00
7977c0e59c Added rfctime.Date type 2023-06-10 19:13:15 +02:00
ceff0161c6 v0.0.159 2023-06-10 18:35:56 +02:00
a30da61419 v0.0.158 2023-06-10 16:28:50 +02:00
b613b122e3 v0.0.157 2023-06-10 16:22:14 +02:00
d017530444 v0.0.156 2023-06-10 00:19:17 +02:00
8de83cc290 v0.0.155 2023-06-08 16:26:06 +02:00
603ec82b83 v0.0.154 2023-06-08 16:24:53 +02:00
93c4cf31a8 v0.0.153 2023-06-08 16:24:15 +02:00
dc2d8a9103 v0.0.152 2023-06-08 16:17:01 +02:00
6589e8d5cd v0.0.151 2023-06-07 17:57:03 +02:00
0006c6859d v0.0.150 2023-06-07 17:48:36 +02:00
827b3fc1b7 v0.0.149 2023-06-07 17:45:45 +02:00
f7dce4a102 v0.0.148 2023-06-07 17:22:38 +02:00
45d4fd7101 v0.0.147 2023-06-07 16:58:17 +02:00
c7df9d2264 v0.0.146 2023-06-07 12:59:15 +02:00
d0954bf133 v0.0.145 2023-06-07 12:45:48 +02:00
8affa81bb9 v0.0.144 2023-06-07 12:39:21 +02:00
fe9ebf0bab v0.0.143 2023-06-07 12:36:41 +02:00
a4b5f33d15 v0.0.142 2023-06-07 11:28:07 +02:00
e89e2c18f2 v0.0.141 2023-06-07 10:56:11 +02:00
b16d5152c7 v0.0.140 2023-06-07 10:42:56 +02:00
5fb2f8a312 v0.0.139 2023-06-06 21:40:34 +02:00
2ad820be8d v0.0.138 2023-06-06 21:33:49 +02:00
555096102a v0.0.137 2023-06-06 21:30:22 +02:00
d76d7b5cb9 v0.0.136 2023-06-06 21:26:12 +02:00
6622c9003d v0.0.135 2023-06-06 21:24:13 +02:00
b02e1d2e85 v0.0.134 2023-06-06 21:22:44 +02:00
c338d23070 v0.0.133 2023-06-06 21:18:40 +02:00
1fbae343a4 Fix RFC3339 serialization 2023-06-06 11:26:46 +02:00
31418bf0e6 v0.0.130 2023-06-05 13:30:32 +02:00
6d45f6f667 v0.0.129 2023-06-05 13:24:52 +02:00
f610a2202c v0.0.128 2023-06-02 09:44:31 +02:00
2807299d46 v0.0.127 2023-05-28 22:55:06 +02:00
e872dbccec v0.0.126 2023-05-28 19:53:30 +02:00
9daf71e2ed v0.0.125 2023-05-28 19:41:24 +02:00
fe278f7772 v0.0.124 2023-05-28 18:21:02 +02:00
8ebda6fb3a v0.0.123 2023-05-25 18:20:31 +02:00
b0d3ce8c1c v0.0.122 2023-05-24 22:01:29 +02:00
021465e524 v0.0.121 2023-05-24 21:55:21 +02:00
cf9c73aa4a v0.0.120 2023-05-24 21:42:10 +02:00
0652bf22dc v0.0.119 2023-05-24 21:32:00 +02:00
b196adffc7 v0.0.118 2023-05-09 11:33:01 +02:00
717065e62d v0.0.117 2023-05-09 09:57:05 +02:00
e7b2b040b2 v0.0.116 2023-05-05 18:22:15 +02:00
05d0f9e469 v0.0.115 2023-05-05 18:18:20 +02:00
ccd03e50c8 v0.0.114 2023-05-05 18:17:15 +02:00
1c77c2b8e8 v0.0.113 2023-05-05 18:05:58 +02:00
9f6f967299 v0.0.112 2023-05-05 18:00:25 +02:00
18c83f0f76 v0.0.111 2023-05-05 17:57:21 +02:00
a64f336e24 v0.0.110 2023-05-05 17:47:30 +02:00
14bbd205f8 v0.0.109 2023-05-05 15:04:08 +02:00
cecfb0d788 v0.0.108 2023-05-05 14:43:40 +02:00
a445e6f623 v0.0.107 2023-04-26 11:35:28 +02:00
0aa6310971 v0.0.106 2023-04-26 11:34:46 +02:00
2f66ab1cf0 v0.0.105 2023-04-23 19:31:48 +02:00
304e779470 v0.0.104 2023-04-23 14:54:23 +02:00
5e295d65c5 v0.0.103 2023-04-20 14:35:55 +02:00
ef3705937c gojson: added MarshalSafeCollections 2023-04-20 14:34:57 +02:00
d780c7965f added gojson as a go/json fork (tag go1.20.2) 2023-04-20 14:30:24 +02:00
c13db6802e v0.0.102 2023-04-13 14:40:07 +02:00
c5e23ab451 v0.0.101 2023-04-08 19:39:13 +02:00
c266d9204b v0.0.100 2023-04-04 17:10:38 +02:00
2550691e2e v0.0.99 2023-03-31 13:33:06 +02:00
ca24e1d5bf v0.0.98 2023-03-29 20:25:03 +02:00
b156052e6f v0.0.97 2023-03-29 19:53:53 +02:00
dda2418255 v0.0.96 2023-03-29 19:53:10 +02:00
8e40deae6a add git-pull to Makefile 2023-03-28 16:30:56 +02:00
289b9f47a2 v0.0.95 2023-03-28 16:29:16 +02:00
007c44df85 v0.0.94 2023-03-21 16:00:15 +01:00
a6252f0743 v0.0.93 2023-03-15 15:41:55 +01:00
86c01659d7 base58 2023-03-15 14:00:48 +01:00
62acddda5e v0.0.91 2023-03-11 14:38:19 +01:00
ee325f67fd v0.0.90 2023-03-09 14:51:53 +01:00
dba0cd229e v0.0.89 2023-03-07 10:43:30 +01:00
ec4dba173f v0.0.88 2023-02-16 13:27:34 +01:00
22ce2d26f3 v0.0.87 2023-02-16 13:22:15 +01:00
4fd768e573 v0.0.86 2023-02-14 17:18:58 +01:00
bf16a8165f v0.0.85 2023-02-14 16:25:45 +01:00
9f5612248a fix fd0 read error on long stdout output (scanner buffer was too small) 2023-02-13 01:41:33 +01:00
4a2b830252 added more tests to cmdrunner (reproduce another ?? cmdrunner bug...) 2023-02-09 16:49:33 +01:00
c492c80881 v0.0.83 2023-02-09 15:06:37 +01:00
26dd16d021 v0.0.82 2023-02-09 15:01:54 +01:00
b0b43de8ca v0.0.81 2023-02-09 11:27:49 +01:00
94f72e4ddf v0.0.80 2023-02-09 11:16:23 +01:00
df4388e6dc v0.0.79 2023-02-08 18:55:51 +01:00
fd33b43f31 v0.0.78 2023-02-03 01:05:36 +01:00
be4de07eb8 v0.0.77 2023-02-03 00:59:54 +01:00
36ed474bfe v0.0.76 2023-01-31 23:46:35 +01:00
fdc590c8c3 v0.0.75 2023-01-31 22:41:12 +01:00
1990e5d32d v0.0.74 2023-01-31 11:01:45 +01:00
72883cf6bd v0.0.73 2023-01-31 10:56:30 +01:00
ff08d5f180 v0.0.72 2023-01-30 19:55:55 +01:00
72d6b538f7 v0.0.71 2023-01-29 22:28:08 +01:00
48dd30fb94 v0.0.70 2023-01-29 22:07:28 +01:00
b7c5756f11 v0.0.69 2023-01-29 22:00:40 +01:00
2070a432a5 v0.0.68 2023-01-29 21:27:55 +01:00
34e6d1819d v0.0.67 2023-01-29 20:42:02 +01:00
87fa6021e4 v0.0.66 2023-01-29 06:02:58 +01:00
297d6c52a8 v0.0.65 2023-01-29 05:45:29 +01:00
b9c46947d2 v0.0.64 2023-01-29 01:10:14 +01:00
412277b3e0 v0.0.63 2023-01-28 22:29:45 +01:00
e46f8019ec v0.0.62 2023-01-28 22:29:21 +01:00
ae952b2166 v0.0.61 2023-01-28 22:28:20 +01:00
b24dba9a45 v0.0.60 2023-01-28 14:44:12 +01:00
cfbc20367d v0.0.59 2023-01-15 02:27:08 +01:00
e25912758e v0.0.58 2023-01-15 02:05:05 +01:00
e1ae77a9db v0.0.57 2023-01-15 01:56:40 +01:00
9d07b3955f v0.0.56 2023-01-13 16:05:39 +01:00
02be696c25 v0.0.55 2023-01-06 02:02:22 +01:00
ba07625b7c v0.0.54 2022-12-29 22:52:52 +01:00
aeded3fb37 v0.0.53 2022-12-24 03:11:09 +01:00
1a1cd6d0aa v0.0.52 2022-12-24 02:50:46 +01:00
64cc1342a0 v0.0.51 2022-12-24 01:14:58 +01:00
8431b6adf5 v0.0.50 2022-12-23 20:08:59 +01:00
24e923fe84 v0.0.49 2022-12-23 19:11:18 +01:00
10ddc7c190 v0.0.48 2022-12-23 14:47:16 +01:00
7f88a0726c v0.0.47 2022-12-23 10:11:01 +01:00
2224db8e85 v0.0.46 2022-12-22 15:59:12 +01:00
c60afc89bb v0.0.45 2022-12-22 15:55:32 +01:00
261 changed files with 37420 additions and 427 deletions

View File

@@ -0,0 +1,55 @@
# https://docs.gitea.com/next/usage/actions/quickstart
# https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions
# https://docs.github.com/en/actions/learn-github-actions/contexts#github-context
name: Build Docker and Deploy
run-name: Build & Deploy ${{ gitea.ref }} on ${{ gitea.actor }}
on:
  push:
    branches:
      - '*'
      - '**'
jobs:
  run_tests:
    name: Run goext test-suite
    runs-on: bfb-cicd-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v3
      - name: Setup go
        uses: actions/setup-go@v4
        with:
          go-version-file: '${{ gitea.workspace }}/go.mod'
      - name: Setup packages
        uses: awalsh128/cache-apt-pkgs-action@latest
        with:
          packages: curl python3
          version: 1.0
      - name: go version
        run: go version
      - name: Run tests
        run: cd "${{ gitea.workspace }}" && make test
      - name: Send failure mail
        if: failure()
        uses: dawidd6/action-send-mail@v3
        with:
          server_address: smtp.fastmail.com
          server_port: 465
          secure: true
          username: ${{secrets.MAIL_USERNAME}}
          password: ${{secrets.MAIL_PASSWORD}}
          subject: Pipeline on '${{ gitea.repository }}' failed
          to: ${{ steps.commiter_info.outputs.MAIL }}
          from: Gitea Actions <gitea_actions@blackforestbytes.de>
          body: "Go to https://gogs.blackforestbytes.com/${{ gitea.repository }}/actions"

2
.idea/.gitignore generated vendored
View File

@@ -6,3 +6,5 @@
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml
# GitHub Copilot persisted chat sessions
/copilot/chatSessions

6
.idea/goext.iml generated
View File

@@ -1,6 +1,10 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="WEB_MODULE" version="4">
<component name="Go" enabled="true" /> <component name="Go" enabled="true">
<buildTags>
<option name="goVersion" value="1.19" />
</buildTags>
</component>
<component name="NewModuleRootManager"> <component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$" /> <content url="file://$MODULE_DIR$" />
<orderEntry type="inheritedJdk" /> <orderEntry type="inheritedJdk" />

6
.idea/golinter.xml generated Normal file
View File

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="GoLinterSettings">
<option name="checkGoLinterExe" value="false" />
</component>
</project>

View File

@@ -1,6 +1,7 @@
<component name="InspectionProjectProfileManager"> <component name="InspectionProjectProfileManager">
<profile version="1.0"> <profile version="1.0">
<option name="myName" value="Project Default" /> <option name="myName" value="Project Default" />
<inspection_tool class="GoMixedReceiverTypes" enabled="false" level="WEAK WARNING" enabled_by_default="false" />
<inspection_tool class="LanguageDetectionInspection" enabled="false" level="WARNING" enabled_by_default="false" /> <inspection_tool class="LanguageDetectionInspection" enabled="false" level="WARNING" enabled_by_default="false" />
<inspection_tool class="SpellCheckingInspection" enabled="false" level="TYPO" enabled_by_default="false"> <inspection_tool class="SpellCheckingInspection" enabled="false" level="TYPO" enabled_by_default="false">
<option name="processCode" value="true" /> <option name="processCode" value="true" />

6
.idea/sqldialects.xml generated Normal file
View File

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="SqlDialectMappings">
<file url="file://$PROJECT_DIR$/sq/sq_test.go" dialect="SQLite" />
</component>
</project>

View File

@@ -3,7 +3,15 @@ run:
echo "This is a library - can't be run" && false echo "This is a library - can't be run" && false
test: test:
go test ./... # go test ./...
which gotestsum || go install gotest.tools/gotestsum@latest
gotestsum --format "testname" -- -tags="timetzdata sqlite_fts5 sqlite_foreign_keys" "./..."
test-in-docker:
tag="goext_temp_test_image:$(shell uuidgen | tr -d '-')"; \
docker build --tag $$tag . -f .gitea/workflows/Dockerfile_tests; \
docker run --rm $$tag; \
docker rmi $$tag
version:
_data/version.sh

105
README.md
View File

@@ -5,4 +5,107 @@ A collection of general & useful library methods
This should not have any heavy dependencies (gin, mongo, etc) and add missing basic language features...
Potentially needs `export GOPRIVATE="gogs.mikescher.com"` Potentially needs `export GOPRIVATE="git.blackforestbytes.com"`
## Packages:
| Name | Maintainer | Description |
|-------------|------------|---------------------------------------------------------------------------------------------------------------|
| langext | Mike | General utility/helper functions (everything that's missing from the go standard library) |
| mathext | Mike | Utility/Helper functions for math |
| cryptext | Mike | Utility/Helper functions for encryption |
| syncext | Mike | Utility/Helper functions for multi-threading / mutex / channels |
| dataext | Mike | Various useful data structures |
| zipext | Mike | Utility for zip/gzip/tar etc |
| reflectext | Mike | Utility for golang reflection |
| fsext | Mike | Utility for filesystem access |
| ctxext | Mike | Utility for context.Context |
| | | |
| mongoext | Mike | Utility/Helper functions for mongodb (kinda abandoned) |
| cursortoken | Mike | MongoDB cursortoken implementation |
| pagination | Mike | Pagination implementation |
| | | |
| totpext | Mike | Implementation of TOTP (2-Factor-Auth) |
| termext | Mike | Utilities for terminals (mostly color output) |
| confext | Mike | Parses environment configuration into structs |
| cmdext | Mike | Runner for external commands/processes |
| | | |
| sq | Mike | Utility functions for sql based databases (primarily sqlite) |
| tst | Mike | Utility functions for unit tests |
| | | |
| rfctime | Mike | Classes for time serialization, with different marshalling methods for mongo and json |
| gojson | Mike | Same interface for marshalling/unmarshalling as go/json, except with proper serialization of null arrays/maps |
| | | |
| bfcodegen | Mike | Various codegen tools (run via go generate) |
| | | |
| rext | Mike | Regex Wrapper, wraps regexp with a better interface |
| wmo | Mike | Mongo Wrapper, wraps mongodb with a better interface |
| | | |
| scn | Mike | SimpleCloudNotifier |
| | | |
## Usage:
### exerr
- see **mongoext/builder.go** for full info
Short summary:
- A better error package with metadata, listeners, api-output and error-traces
- Initialize with `exerr.Init()`
- *Never* return `err` directly, always use `exerr.Wrap(err, "...")` - add metadata where applicable (a short sketch follows below)
- At the end either Print(), Fatal() or Output() your error (print = stdout, fatal = panic, output = json API response)
- You can add listeners with exerr.RegisterListener(), and save the full errors to a db or similar
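
A minimal sketch of the wrap-and-annotate flow described above. The package, `User`, `loadUser` and the metadata key are invented for illustration, and the assumption that `exerr.Wrap` exposes the same `.Str(...).Build()` builder chain as the `exerr.New` calls in the generated code further below is just that, an assumption:

```go
package usersvc // hypothetical package, purely for illustration

import (
	"context"

	"git.blackforestbytes.com/BlackForestBytes/goext/exerr"
)

type User struct{ Name string }

// loadUser stands in for any call that can fail.
func loadUser(ctx context.Context, id string) (*User, error) { return &User{Name: "alice"}, nil }

// GetUserName never returns err directly - it wraps it and attaches metadata.
// exerr.Init() must have been called once at program start (see above); the
// caller at the very top of the stack would then Print() or Output() the error.
func GetUserName(ctx context.Context, id string) (string, error) {
	u, err := loadUser(ctx, id)
	if err != nil {
		return "", exerr.Wrap(err, "failed to load user").Str("userID", id).Build()
	}
	return u.Name, nil
}
```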
### wmo
- A typed wrapper around the official mongo-go-driver
- Use `wmo.W[...](...)` to wrap the collections and type-ify them (see the sketch below)
- The new collections have all the usual methods, but typed
- They also have List() and Paginate() methods for paginated listings (either with a cursortoken or page/limit)
- Register additional hooks with `WithDecodeFunc`, `WithUnmarshalHook`, `WithMarshalHook`, `WithModifyingPipeline`, `WithModifyingPipelineFunc`
- List(), Paginate(), etc. support filter interfaces
- Rule(s) of thumb:
  - filter the results in the filter interface
  - sort the results in the sort function of the filter interface
  - add joins ($lookup's) in the `WithModifyingPipelineFunc`/`WithModifyingPipeline`
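
A minimal sketch of the wrapping step, assuming `wmo.W` takes the underlying `*mongo.Collection` and returns the typed `wmo.Coll[...]` wrapper referenced in the commit messages above; the `User` model, its bson tags and the collection name are invented, and the exact List()/Paginate() signatures are not shown in this diff:

```go
package db // hypothetical package, purely for illustration

import (
	"go.mongodb.org/mongo-driver/mongo"

	"git.blackforestbytes.com/BlackForestBytes/goext/wmo"
)

type User struct {
	ID   string `bson:"_id"`
	Name string `bson:"name"`
}

// NewUserColl wraps the raw collection; the returned value exposes the usual
// typed methods plus List()/Paginate() for cursortoken- or page/limit-based listings.
func NewUserColl(database *mongo.Database) *wmo.Coll[User] {
	return wmo.W[User](database.Collection("users"))
}
```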
#### ginext
- A wrapper around gin-gonic/gin
- create the gin engine with `ginext.NewEngine`
- Add routes with `engine.Routes()...`
- `.Use(..)` adds a middleware
- `.Group(..)` adds a group
- `.Get().Handle(..)` adds a handler
- Handlers return values (in contrast to plain gin) - the returned values implement the `ginext.HTTPResponse` interface
- Every handler starts with something like:
```go
func (handler Handler) CommunityMetricsValues(pctx ginext.PreContext) ginext.HTTPResponse {
	type communityURI struct {
		Version     string             `uri:"version"`
		CommunityID models.CommunityID `uri:"cid"`
	}
	type body struct {
		UserID  models.UserID  `json:"userID"`
		EventID models.EventID `json:"eventID"`
	}

	var u communityURI
	var b body
	ctx, gctx, httpErr := pctx.URI(&u).Body(&b).Start() // can have more unmarshallers, like header, form, etc
	if httpErr != nil {
		return *httpErr
	}
	defer ctx.Cancel()

	_ = gctx // raw gin context, available if needed

	// do stuff, then return a ginext.HTTPResponse
}
```
#### sq
- TODO (like mongoext for sqlite/sql databases)

9
TODO.md Normal file
View File

@@ -0,0 +1,9 @@
- cronext
- rfctime.HMSTimeOnly
- rfctime.NanoTimeOnly
- remove sqlx dependency from sq (unmaintained, and mostly superseded by our own stuff?)
- Move DBLogger and DBPreprocessor to sq

View File

@@ -7,6 +7,29 @@ set -o pipefail # Return value of a pipeline is the value of the last (rightmos
IFS=$'\n\t' # Set $IFS to only newline and tab.
function black() { echo -e "\x1B[30m $1 \x1B[0m"; }
function red() { echo -e "\x1B[31m $1 \x1B[0m"; }
function green() { echo -e "\x1B[32m $1 \x1B[0m"; }
function yellow(){ echo -e "\x1B[33m $1 \x1B[0m"; }
function blue() { echo -e "\x1B[34m $1 \x1B[0m"; }
function purple(){ echo -e "\x1B[35m $1 \x1B[0m"; }
function cyan() { echo -e "\x1B[36m $1 \x1B[0m"; }
function white() { echo -e "\x1B[37m $1 \x1B[0m"; }
if [ "$( git rev-parse --abbrev-ref HEAD )" != "master" ]; then
>&2 red "[ERROR] Can only create versions of <master>"
exit 1
fi
echo ""
echo -n "Insert optional commit message: "
read commitMessage
echo ""
git pull --ff
go get -u ./...
curr_vers=$(git describe --tags --abbrev=0 | sed 's/v//g')
next_ver=$(echo "$curr_vers" | awk -F. -v OFS=. 'NF==1{print ++$NF}; NF>1{if(length($NF+1)>length($NF))$(NF-1)++; $NF=sprintf("%0*d", length($NF), ($NF+1)%(10^length($NF))); print}')
@@ -16,9 +39,22 @@ echo "> Current Version: ${curr_vers}"
echo "> Next Version: ${next_ver}" echo "> Next Version: ${next_ver}"
echo "" echo ""
printf "package goext\n\nconst GoextVersion = \"%s\"\n\nconst GoextVersionTimestamp = \"%s\"\n" "${next_ver}" "$( date +"%Y-%m-%dT%H:%M:%S%z" )" > "goextVersion.go"
git add --verbose .
git commit -a -m "v${next_ver}" msg="v${next_ver}"
if [[ "$commitMessage" != "" ]]; then
msg="${msg} ${commitMessage}"
fi
if [ $# -gt 0 ]; then
msg="$1"
fi
git commit -a -m "${msg}"
git tag "v${next_ver}" git tag "v${next_ver}"

Binary file not shown.

Binary file not shown.

197
bfcodegen/csid-generate.go Normal file
View File

@@ -0,0 +1,197 @@
package bfcodegen
import (
"bytes"
_ "embed"
"errors"
"fmt"
"go/format"
"git.blackforestbytes.com/BlackForestBytes/goext"
"git.blackforestbytes.com/BlackForestBytes/goext/cryptext"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/rext"
"io"
"os"
"path"
"path/filepath"
"regexp"
"strings"
"text/template"
)
type CSIDDef struct {
File string
FileRelative string
Name string
Prefix string
}
type CSIDGenOptions struct {
DebugOutput *bool
}
var rexCSIDPackage = rext.W(regexp.MustCompile(`^package\s+(?P<name>[A-Za-z0-9_]+)\s*$`))
var rexCSIDDef = rext.W(regexp.MustCompile(`^\s*type\s+(?P<name>[A-Za-z0-9_]+)\s+string\s*//\s*(@csid:type)\s+\[(?P<prefix>[A-Z0-9]{3})].*$`))
var rexCSIDChecksumConst = rext.W(regexp.MustCompile(`const ChecksumCharsetIDGenerator = "(?P<cs>[A-Za-z0-9_]*)"`))
//go:embed csid-generate.template
var templateCSIDGenerateText string
func GenerateCharsetIDSpecs(sourceDir string, destFile string, opt CSIDGenOptions) error {
debugOutput := langext.Coalesce(opt.DebugOutput, false)
files, err := os.ReadDir(sourceDir)
if err != nil {
return err
}
oldChecksum := "N/A"
if _, err := os.Stat(destFile); !os.IsNotExist(err) {
content, err := os.ReadFile(destFile)
if err != nil {
return err
}
if m, ok := rexCSIDChecksumConst.MatchFirst(string(content)); ok {
oldChecksum = m.GroupByName("cs").Value()
}
}
files = langext.ArrFilter(files, func(v os.DirEntry) bool { return v.Name() != path.Base(destFile) })
files = langext.ArrFilter(files, func(v os.DirEntry) bool { return strings.HasSuffix(v.Name(), ".go") })
files = langext.ArrFilter(files, func(v os.DirEntry) bool { return !strings.HasSuffix(v.Name(), "_gen.go") })
langext.SortBy(files, func(v os.DirEntry) string { return v.Name() })
newChecksumStr := goext.GoextVersion
for _, f := range files {
content, err := os.ReadFile(path.Join(sourceDir, f.Name()))
if err != nil {
return err
}
newChecksumStr += "\n" + f.Name() + "\t" + cryptext.BytesSha256(content)
}
newChecksum := cryptext.BytesSha256([]byte(newChecksumStr))
if newChecksum != oldChecksum {
fmt.Printf("[CSIDGenerate] Checksum has changed ( %s -> %s ), will generate new file\n\n", oldChecksum, newChecksum)
} else {
fmt.Printf("[CSIDGenerate] Checksum unchanged ( %s ), nothing to do\n", oldChecksum)
return nil
}
allIDs := make([]CSIDDef, 0)
pkgname := ""
for _, f := range files {
if debugOutput {
fmt.Printf("========= %s =========\n\n", f.Name())
}
fileIDs, pn, err := processCSIDFile(sourceDir, path.Join(sourceDir, f.Name()), debugOutput)
if err != nil {
return err
}
if debugOutput {
fmt.Printf("\n")
}
allIDs = append(allIDs, fileIDs...)
if pn != "" {
pkgname = pn
}
}
if pkgname == "" {
return errors.New("no package name found in any file")
}
fdata, err := format.Source([]byte(fmtCSIDOutput(newChecksum, allIDs, pkgname)))
if err != nil {
return err
}
err = os.WriteFile(destFile, fdata, 0o755)
if err != nil {
return err
}
return nil
}
func processCSIDFile(basedir string, fn string, debugOutput bool) ([]CSIDDef, string, error) {
file, err := os.Open(fn)
if err != nil {
return nil, "", err
}
defer func() { _ = file.Close() }()
bin, err := io.ReadAll(file)
if err != nil {
return nil, "", err
}
lines := strings.Split(string(bin), "\n")
ids := make([]CSIDDef, 0)
pkgname := ""
for i, line := range lines {
if i == 0 && strings.HasPrefix(line, "// Code generated by") {
break
}
if match, ok := rexCSIDPackage.MatchFirst(line); i == 0 && ok {
pkgname = match.GroupByName("name").Value()
continue
}
if match, ok := rexCSIDDef.MatchFirst(line); ok {
rfp, err := filepath.Rel(basedir, fn)
if err != nil {
return nil, "", err
}
def := CSIDDef{
File: fn,
FileRelative: rfp,
Name: match.GroupByName("name").Value(),
Prefix: match.GroupByName("prefix").Value(),
}
if debugOutput {
fmt.Printf("Found ID definition { '%s' }\n", def.Name)
}
ids = append(ids, def)
}
}
return ids, pkgname, nil
}
func fmtCSIDOutput(cs string, ids []CSIDDef, pkgname string) string {
templ := template.Must(template.New("csid-generate").Parse(templateCSIDGenerateText))
buffer := bytes.Buffer{}
err := templ.Execute(&buffer, langext.H{
"PkgName": pkgname,
"Checksum": cs,
"GoextVersion": goext.GoextVersion,
"IDs": ids,
})
if err != nil {
panic(err)
}
return buffer.String()
}
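
The generator above scans a package for `string` type declarations annotated with `@csid:type` and a three-character prefix (see `rexCSIDDef`). A sketch of such an input file - the package, type names and prefixes are invented:

```go
// models/ids.go - hypothetical input for GenerateCharsetIDSpecs
package models

type SessionID string // @csid:type [SES]
type DeviceID  string // @csid:type [DEV]
```

Generation is then a single call (typically wired up via `go generate`), e.g. `GenerateCharsetIDSpecs("./models", "./models/csid_gen.go", CSIDGenOptions{DebugOutput: langext.PTrue})`, mirroring what the `TestGenerateCSIDSpecs` test further down does against a tarball of example models.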

View File

@@ -0,0 +1,194 @@
// Code generated by csid-generate.go DO NOT EDIT.
package {{.PkgName}}
import "crypto/rand"
import "crypto/sha256"
import "fmt"
import "github.com/go-playground/validator/v10"
import "github.com/rs/zerolog/log"
import "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
import "git.blackforestbytes.com/BlackForestBytes/goext/langext"
import "git.blackforestbytes.com/BlackForestBytes/goext/rext"
import "math/big"
import "reflect"
import "regexp"
import "strings"
const ChecksumCharsetIDGenerator = "{{.Checksum}}" // GoExtVersion: {{.GoextVersion}}
const idlen = 24
const checklen = 1
const idCharset = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
const idCharsetLen = len(idCharset)
var charSetReverseMap = generateCharsetMap()
const ({{range .IDs}}
prefix{{.Name}} = "{{.Prefix}}" {{end}}
)
var ({{range .IDs}}
regex{{.Name}} = generateRegex(prefix{{.Name}}) {{end}}
)
func generateRegex(prefix string) rext.Regex {
return rext.W(regexp.MustCompile(fmt.Sprintf("^%s[%s]{%d}[%s]{%d}$", prefix, idCharset, idlen-len(prefix)-checklen, idCharset, checklen)))
}
func generateCharsetMap() []int {
result := make([]int, 128)
for i := 0; i < len(result); i++ {
result[i] = -1
}
for idx, chr := range idCharset {
result[int(chr)] = idx
}
return result
}
func generateID(prefix string) string {
k := ""
csMax := big.NewInt(int64(idCharsetLen))
checksum := 0
for i := 0; i < idlen-len(prefix)-checklen; i++ {
v, err := rand.Int(rand.Reader, csMax)
if err != nil {
panic(err)
}
v64 := v.Int64()
k += string(idCharset[v64])
checksum = (checksum + int(v64)) % (idCharsetLen)
}
checkstr := string(idCharset[checksum%idCharsetLen])
return prefix + k + checkstr
}
func generateIDFromSeed(prefix string, seed string) string {
h := sha256.New()
iddata := ""
for len(iddata) < idlen-len(prefix)-checklen {
h.Write([]byte(seed))
bs := h.Sum(nil)
iddata += langext.NewAnyBaseConverter(idCharset).Encode(bs)
}
checksum := 0
for i := 0; i < idlen-len(prefix)-checklen; i++ {
ichr := int(iddata[i])
checksum = (checksum + charSetReverseMap[ichr]) % (idCharsetLen)
}
checkstr := string(idCharset[checksum%idCharsetLen])
return prefix + iddata[:(idlen-len(prefix)-checklen)] + checkstr
}
func validateID(prefix string, value string) error {
if len(value) != idlen {
return exerr.New(exerr.TypeInvalidCSID, "id has the wrong length").Str("value", value).Build()
}
if !strings.HasPrefix(value, prefix) {
return exerr.New(exerr.TypeInvalidCSID, "id is missing the correct prefix").Str("value", value).Str("prefix", prefix).Build()
}
checksum := 0
for i := len(prefix); i < len(value)-checklen; i++ {
ichr := int(value[i])
if ichr < 0 || ichr >= len(charSetReverseMap) || charSetReverseMap[ichr] == -1 {
return exerr.New(exerr.TypeInvalidCSID, "id contains invalid characters").Str("value", value).Build()
}
checksum = (checksum + charSetReverseMap[ichr]) % (idCharsetLen)
}
checkstr := string(idCharset[checksum%idCharsetLen])
if !strings.HasSuffix(value, checkstr) {
return exerr.New(exerr.TypeInvalidCSID, "id checkstring is invalid").Str("value", value).Str("checkstr", checkstr).Build()
}
return nil
}
func getRawData(prefix string, value string) string {
if len(value) != idlen {
return ""
}
return value[len(prefix) : idlen-checklen]
}
func getCheckString(prefix string, value string) string {
if len(value) != idlen {
return ""
}
return value[idlen-checklen:]
}
func ValidateEntityID(vfl validator.FieldLevel) bool {
if !vfl.Field().CanInterface() {
log.Error().Msgf("Failed to validate EntityID (cannot interface ?!?)")
return false
}
ifvalue := vfl.Field().Interface()
if value1, ok := ifvalue.(EntityID); ok {
if vfl.Field().Type().Kind() == reflect.Pointer && langext.IsNil(value1) {
return true
}
if err := value1.Valid(); err != nil {
log.Debug().Msgf("Failed to validate EntityID '%s' (%s)", value1.String(), err.Error())
return false
} else {
return true
}
} else {
log.Error().Msgf("Failed to validate EntityID (wrong type: %T)", ifvalue)
return false
}
}
{{range .IDs}}
// ================================ {{.Name}} ({{.FileRelative}}) ================================
func New{{.Name}}() {{.Name}} {
return {{.Name}}(generateID(prefix{{.Name}}))
}
func (id {{.Name}}) Valid() error {
return validateID(prefix{{.Name}}, string(id))
}
func (i {{.Name}}) String() string {
return string(i)
}
func (i {{.Name}}) Prefix() string {
return prefix{{.Name}}
}
func (id {{.Name}}) Raw() string {
return getRawData(prefix{{.Name}}, string(id))
}
func (id {{.Name}}) CheckString() string {
return getCheckString(prefix{{.Name}}, string(id))
}
func (id {{.Name}}) IsZero() bool {
return id == ""
}
func (id {{.Name}}) Regex() rext.Regex {
return regex{{.Name}}
}
{{end}}
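
For a type declared as in the previous sketch, this template generates a constructor plus validation and accessor methods. A hedged usage sketch (`SessionID` is still the hypothetical type from before):

```go
package models // same hypothetical package the generated csid_gen.go lives in

func exampleSessionID() error {
	id := NewSessionID() // random, 24 characters: "SES" prefix + payload + 1 check character
	if err := id.Valid(); err != nil {
		return err // wrong length, prefix, charset or check character
	}
	_ = id.Prefix()      // "SES"
	_ = id.Raw()         // the payload without prefix and check character
	_ = id.CheckString() // the single trailing check character
	_ = id.IsZero()      // false here; only true for the empty string
	return nil
}
```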

View File

@@ -0,0 +1,52 @@
package bfcodegen
import (
_ "embed"
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/cmdext"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/tst"
"os"
"path/filepath"
"testing"
"time"
)
//go:embed _test_example_1.tgz
var CSIDExampleModels1 []byte
func TestGenerateCSIDSpecs(t *testing.T) {
tmpFile := filepath.Join(t.TempDir(), langext.MustHexUUID()+".tgz")
tmpDir := filepath.Join(t.TempDir(), langext.MustHexUUID())
err := os.WriteFile(tmpFile, CSIDExampleModels1, 0o777)
tst.AssertNoErr(t, err)
t.Cleanup(func() { _ = os.Remove(tmpFile) })
err = os.Mkdir(tmpDir, 0o777)
tst.AssertNoErr(t, err)
t.Cleanup(func() { _ = os.RemoveAll(tmpDir) })
_, err = cmdext.Runner("tar").Arg("-xvzf").Arg(tmpFile).Arg("-C").Arg(tmpDir).FailOnExitCode().FailOnTimeout().Timeout(time.Minute).Run()
tst.AssertNoErr(t, err)
err = GenerateCharsetIDSpecs(tmpDir, tmpDir+"/csid_gen.go", CSIDGenOptions{DebugOutput: langext.PTrue})
tst.AssertNoErr(t, err)
err = GenerateCharsetIDSpecs(tmpDir, tmpDir+"/csid_gen.go", CSIDGenOptions{DebugOutput: langext.PTrue})
tst.AssertNoErr(t, err)
fmt.Println()
fmt.Println()
fmt.Println()
fmt.Println("=====================================================================================================")
fmt.Println(string(tst.Must(os.ReadFile(tmpDir + "/csid_gen.go"))(t)))
fmt.Println("=====================================================================================================")
fmt.Println()
fmt.Println()
fmt.Println()
}

373
bfcodegen/enum-generate.go Normal file
View File

@@ -0,0 +1,373 @@
package bfcodegen
import (
"bytes"
_ "embed"
"encoding/json"
"errors"
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext"
"git.blackforestbytes.com/BlackForestBytes/goext/cryptext"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/rext"
"go/format"
"io"
"os"
"path"
"path/filepath"
"reflect"
"regexp"
"strings"
"text/template"
)
type EnumDefVal struct {
VarName string
Value string
Description *string
Data *map[string]any
RawComment *string
}
type EnumDef struct {
File string
FileRelative string
EnumTypeName string
Type string
Values []EnumDefVal
}
type EnumGenOptions struct {
DebugOutput *bool
GoFormat *bool
}
var rexEnumPackage = rext.W(regexp.MustCompile(`^package\s+(?P<name>[A-Za-z0-9_]+)\s*$`))
var rexEnumDef = rext.W(regexp.MustCompile(`^\s*type\s+(?P<name>[A-Za-z0-9_]+)\s+(?P<type>[A-Za-z0-9_]+)\s*//\s*(@enum:type).*$`))
var rexEnumValueDef = rext.W(regexp.MustCompile(`^\s*(?P<name>[A-Za-z0-9_]+)\s+(?P<type>[A-Za-z0-9_]+)\s*=\s*(?P<value>("[/@A-Za-z0-9_:\s\-.]*"|[0-9]+))\s*(//(?P<comm>.*))?.*$`))
var rexEnumChecksumConst = rext.W(regexp.MustCompile(`const ChecksumEnumGenerator = "(?P<cs>[A-Za-z0-9_]*)"`))
//go:embed enum-generate.template
var templateEnumGenerateText string
func GenerateEnumSpecs(sourceDir string, destFile string, opt EnumGenOptions) error {
oldChecksum := "N/A"
if _, err := os.Stat(destFile); !os.IsNotExist(err) {
content, err := os.ReadFile(destFile)
if err != nil {
return err
}
if m, ok := rexEnumChecksumConst.MatchFirst(string(content)); ok {
oldChecksum = m.GroupByName("cs").Value()
}
}
gocode, _, changed, err := _generateEnumSpecs(sourceDir, destFile, oldChecksum, langext.Coalesce(opt.GoFormat, true), langext.Coalesce(opt.DebugOutput, false))
if err != nil {
return err
}
if !changed {
return nil
}
err = os.WriteFile(destFile, []byte(gocode), 0o755)
if err != nil {
return err
}
return nil
}
func _generateEnumSpecs(sourceDir string, destFile string, oldChecksum string, gofmt bool, debugOutput bool) (string, string, bool, error) {
files, err := os.ReadDir(sourceDir)
if err != nil {
return "", "", false, err
}
files = langext.ArrFilter(files, func(v os.DirEntry) bool { return v.Name() != path.Base(destFile) })
files = langext.ArrFilter(files, func(v os.DirEntry) bool { return strings.HasSuffix(v.Name(), ".go") })
files = langext.ArrFilter(files, func(v os.DirEntry) bool { return !strings.HasSuffix(v.Name(), "_gen.go") })
langext.SortBy(files, func(v os.DirEntry) string { return v.Name() })
newChecksumStr := goext.GoextVersion
for _, f := range files {
content, err := os.ReadFile(path.Join(sourceDir, f.Name()))
if err != nil {
return "", "", false, err
}
newChecksumStr += "\n" + f.Name() + "\t" + cryptext.BytesSha256(content)
}
newChecksum := cryptext.BytesSha256([]byte(newChecksumStr))
if newChecksum != oldChecksum {
fmt.Printf("[EnumGenerate] Checksum has changed ( %s -> %s ), will generate new file\n\n", oldChecksum, newChecksum)
} else {
fmt.Printf("[EnumGenerate] Checksum unchanged ( %s ), nothing to do\n", oldChecksum)
return "", oldChecksum, false, nil
}
allEnums := make([]EnumDef, 0)
pkgname := ""
for _, f := range files {
if debugOutput {
fmt.Printf("========= %s =========\n\n", f.Name())
}
fileEnums, pn, err := processEnumFile(sourceDir, path.Join(sourceDir, f.Name()), debugOutput)
if err != nil {
return "", "", false, err
}
if debugOutput {
fmt.Printf("\n")
}
allEnums = append(allEnums, fileEnums...)
if pn != "" {
pkgname = pn
}
}
if pkgname == "" {
return "", "", false, errors.New("no package name found in any file")
}
rdata := fmtEnumOutput(newChecksum, allEnums, pkgname)
if !gofmt {
return rdata, newChecksum, true, nil
}
fdata, err := format.Source([]byte(rdata))
if err != nil {
return "", "", false, err
}
return string(fdata), newChecksum, true, nil
}
func processEnumFile(basedir string, fn string, debugOutput bool) ([]EnumDef, string, error) {
file, err := os.Open(fn)
if err != nil {
return nil, "", err
}
defer func() { _ = file.Close() }()
bin, err := io.ReadAll(file)
if err != nil {
return nil, "", err
}
lines := strings.Split(string(bin), "\n")
enums := make([]EnumDef, 0)
pkgname := ""
for i, line := range lines {
if i == 0 && strings.HasPrefix(line, "// Code generated by") {
break
}
if match, ok := rexEnumPackage.MatchFirst(line); i == 0 && ok {
pkgname = match.GroupByName("name").Value()
continue
}
if match, ok := rexEnumDef.MatchFirst(line); ok {
rfp, err := filepath.Rel(basedir, fn)
if err != nil {
return nil, "", err
}
def := EnumDef{
File: fn,
FileRelative: rfp,
EnumTypeName: match.GroupByName("name").Value(),
Type: match.GroupByName("type").Value(),
Values: make([]EnumDefVal, 0),
}
enums = append(enums, def)
if debugOutput {
fmt.Printf("Found enum definition { '%s' -> '%s' }\n", def.EnumTypeName, def.Type)
}
}
if match, ok := rexEnumValueDef.MatchFirst(line); ok {
typename := match.GroupByName("type").Value()
comment := match.GroupByNameOrEmpty("comm").ValueOrNil()
var descr *string = nil
var data *map[string]any = nil
if comment != nil {
comment = langext.Ptr(strings.TrimSpace(*comment))
if strings.HasPrefix(*comment, "{") {
if v, ok := tryParseDataComment(*comment); ok {
data = &v
if anyDataDescr, ok := v["description"]; ok {
if dataDescr, ok := anyDataDescr.(string); ok {
descr = &dataDescr
}
}
} else {
descr = comment
}
} else {
descr = comment
}
}
def := EnumDefVal{
VarName: match.GroupByName("name").Value(),
Value: match.GroupByName("value").Value(),
RawComment: comment,
Description: descr,
Data: data,
}
found := false
for i, v := range enums {
if v.EnumTypeName == typename {
enums[i].Values = append(enums[i].Values, def)
found = true
if debugOutput {
if def.Description != nil {
fmt.Printf("Found enum value [%s] for '%s' ('%s')\n", def.Value, def.VarName, *def.Description)
} else {
fmt.Printf("Found enum value [%s] for '%s'\n", def.Value, def.VarName)
}
}
break
}
}
if !found {
if debugOutput {
fmt.Printf("Found non-enum value [%s] for '%s' ( looks like enum value, but no matching @enum:type )\n", def.Value, def.VarName)
}
}
}
}
return enums, pkgname, nil
}
func tryParseDataComment(s string) (map[string]any, bool) {
r := make(map[string]any)
err := json.Unmarshal([]byte(s), &r)
if err != nil {
return nil, false
}
for _, v := range r {
rv := reflect.ValueOf(v)
if rv.Kind() == reflect.Ptr && rv.IsNil() {
continue
}
if rv.Kind() == reflect.Bool {
continue
}
if rv.Kind() == reflect.String {
continue
}
if rv.Kind() == reflect.Int64 {
continue
}
if rv.Kind() == reflect.Float64 {
continue
}
return nil, false
}
return r, true
}
func fmtEnumOutput(cs string, enums []EnumDef, pkgname string) string {
templ := template.New("enum-generate")
templ = templ.Funcs(template.FuncMap{
"boolToStr": func(b bool) string { return langext.Conditional(b, "true", "false") },
"deref": func(v *string) string { return *v },
"trimSpace": func(str string) string { return strings.TrimSpace(str) },
"hasStr": func(v EnumDef) bool { return v.Type == "string" },
"hasDescr": func(v EnumDef) bool {
return langext.ArrAll(v.Values, func(val EnumDefVal) bool { return val.Description != nil })
},
"hasData": func(v EnumDef) bool {
return len(v.Values) > 0 && langext.ArrAll(v.Values, func(val EnumDefVal) bool { return val.Data != nil })
},
"gostr": func(v any) string {
return fmt.Sprintf("%#+v", v)
},
"goobj": func(name string, v any) string {
return fmt.Sprintf("%#+v", v)
},
"godatakey": func(v string) string {
return strings.ToUpper(v[0:1]) + v[1:]
},
"godatavalue": func(v any) string {
return fmt.Sprintf("%#+v", v)
},
"godatatype": func(v any) string {
return fmt.Sprintf("%T", v)
},
"mapindex": func(v map[string]any, k string) any {
return v[k]
},
"generalDataKeys": func(v EnumDef) map[string]string {
r0 := make(map[string]int)
for _, eval := range v.Values {
for k := range *eval.Data {
if ctr, ok := r0[k]; ok {
r0[k] = ctr + 1
} else {
r0[k] = 1
}
}
}
r1 := langext.MapToArr(r0)
r2 := langext.ArrFilter(r1, func(p langext.MapEntry[string, int]) bool { return p.Value == len(v.Values) })
r3 := langext.ArrMap(r2, func(p langext.MapEntry[string, int]) string { return p.Key })
r4 := langext.ArrToKVMap(r3, func(p string) string { return p }, func(p string) string { return fmt.Sprintf("%T", (*v.Values[0].Data)[p]) })
return r4
},
})
templ = template.Must(templ.Parse(templateEnumGenerateText))
buffer := bytes.Buffer{}
err := templ.Execute(&buffer, langext.H{
"PkgName": pkgname,
"Checksum": cs,
"GoextVersion": goext.GoextVersion,
"Enums": enums,
})
if err != nil {
panic(err)
}
return buffer.String()
}
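
The generator above recognises type declarations marked with `@enum:type` and constant values whose trailing comment is either a plain description or a JSON object (parsed by `tryParseDataComment`, with the `description` key treated specially). A sketch of an input file - package, type and values are invented:

```go
// models/jobstate.go - hypothetical input for GenerateEnumSpecs
package models

type JobState string // @enum:type

const (
	JobStateQueued  JobState = "QUEUED"  // {"description": "waiting for a worker"}
	JobStateRunning JobState = "RUNNING" // {"description": "currently executing"}
	JobStateDone    JobState = "DONE"    // {"description": "finished successfully"}
)
```

Running `GenerateEnumSpecs("./models", "./models/enums_gen.go", EnumGenOptions{GoFormat: langext.PTrue})` then renders the template below; because every value carries a data comment with a `description`, both `Description()` and `Data()` accessors are emitted.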

View File

@@ -0,0 +1,155 @@
// Code generated by enum-generate.go DO NOT EDIT.
package {{.PkgName}}
import "git.blackforestbytes.com/BlackForestBytes/goext/langext"
import "git.blackforestbytes.com/BlackForestBytes/goext/enums"
const ChecksumEnumGenerator = "{{.Checksum}}" // GoExtVersion: {{.GoextVersion}}
{{ $pkgname := .PkgName }}
{{range .Enums}}
{{ $hasStr := ( . | hasStr ) }}
{{ $hasDescr := ( . | hasDescr ) }}
{{ $hasData := ( . | hasData ) }}
// ================================ {{.EnumTypeName}} ================================
//
// File: {{.FileRelative}}
// StringEnum: {{$hasStr | boolToStr}}
// DescrEnum: {{$hasDescr | boolToStr}}
// DataEnum: {{$hasData | boolToStr}}
//
{{ $typename := .EnumTypeName }}
{{ $enumdef := . }}
var __{{.EnumTypeName}}Values = []{{.EnumTypeName}}{ {{range .Values}}
{{.VarName}}, {{end}}
}
{{if $hasDescr}}
var __{{.EnumTypeName}}Descriptions = map[{{.EnumTypeName}}]string{ {{range .Values}}
{{.VarName}}: {{.Description | deref | trimSpace | gostr}}, {{end}}
}
{{end}}
{{if $hasData}}
type {{ .EnumTypeName }}Data struct { {{ range $datakey, $datatype := ($enumdef | generalDataKeys) }}
{{ $datakey | godatakey }} {{ $datatype }} `json:"{{ $datakey }}"` {{ end }}
}
var __{{.EnumTypeName}}Data = map[{{.EnumTypeName}}]{{.EnumTypeName}}Data{ {{range .Values}} {{ $enumvalue := . }}
{{.VarName}}: {{ $typename }}Data{ {{ range $datakey, $datatype := $enumdef | generalDataKeys }}
{{ $datakey | godatakey }}: {{ (mapindex $enumvalue.Data $datakey) | godatavalue }}, {{ end }}
}, {{end}}
}
{{end}}
var __{{.EnumTypeName}}Varnames = map[{{.EnumTypeName}}]string{ {{range .Values}}
{{.VarName}}: "{{.VarName}}", {{end}}
}
func (e {{.EnumTypeName}}) Valid() bool {
return langext.InArray(e, __{{.EnumTypeName}}Values)
}
func (e {{.EnumTypeName}}) Values() []{{.EnumTypeName}} {
return __{{.EnumTypeName}}Values
}
func (e {{.EnumTypeName}}) ValuesAny() []any {
return langext.ArrCastToAny(__{{.EnumTypeName}}Values)
}
func (e {{.EnumTypeName}}) ValuesMeta() []enums.EnumMetaValue {
return {{.EnumTypeName}}ValuesMeta()
}
{{if $hasStr}}
func (e {{.EnumTypeName}}) String() string {
return string(e)
}
{{end}}
{{if $hasDescr}}
func (e {{.EnumTypeName}}) Description() string {
if d, ok := __{{.EnumTypeName}}Descriptions[e]; ok {
return d
}
return ""
}
{{end}}
{{if $hasData}}
func (e {{.EnumTypeName}}) Data() {{.EnumTypeName}}Data {
if d, ok := __{{.EnumTypeName}}Data[e]; ok {
return d
}
return {{.EnumTypeName}}Data{}
}
{{end}}
func (e {{.EnumTypeName}}) VarName() string {
if d, ok := __{{.EnumTypeName}}Varnames[e]; ok {
return d
}
return ""
}
func (e {{.EnumTypeName}}) TypeName() string {
return "{{$typename}}"
}
func (e {{.EnumTypeName}}) PackageName() string {
return "{{$pkgname }}"
}
func (e {{.EnumTypeName}}) Meta() enums.EnumMetaValue {
{{if $hasDescr}} return enums.EnumMetaValue{VarName: e.VarName(), Value: e, Description: langext.Ptr(e.Description())} {{else}} return enums.EnumMetaValue{VarName: e.VarName(), Value: e, Description: nil} {{end}}
}
{{if $hasDescr}}
func (e {{.EnumTypeName}}) DescriptionMeta() enums.EnumDescriptionMetaValue {
return enums.EnumDescriptionMetaValue{VarName: e.VarName(), Value: e, Description: e.Description()}
}
{{end}}
func Parse{{.EnumTypeName}}(vv string) ({{.EnumTypeName}}, bool) {
for _, ev := range __{{.EnumTypeName}}Values {
if string(ev) == vv {
return ev, true
}
}
return "", false
}
func {{.EnumTypeName}}Values() []{{.EnumTypeName}} {
return __{{.EnumTypeName}}Values
}
func {{.EnumTypeName}}ValuesMeta() []enums.EnumMetaValue {
return []enums.EnumMetaValue{ {{range .Values}}
{{.VarName}}.Meta(), {{end}}
}
}
{{if $hasDescr}}
func {{.EnumTypeName}}ValuesDescriptionMeta() []enums.EnumDescriptionMetaValue {
return []enums.EnumDescriptionMetaValue{ {{range .Values}}
{{.VarName}}.DescriptionMeta(), {{end}}
}
}
{{end}}
{{end}}
// ================================ ================= ================================
func AllPackageEnums() []enums.Enum {
return []enums.Enum{ {{range .Enums}}
{{ if gt (len .Values) 0 }} {{ $v := index .Values 0 }} {{ $v.VarName}}, {{end}} // {{ .EnumTypeName }} {{end}}
}
}
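
A short, hedged sketch of how the surface generated by this template is typically consumed; `JobState` is still the hypothetical enum from the earlier sketch:

```go
package models // same hypothetical package the generated file lives in

func exampleJobState(raw string) (JobState, bool) {
	s, ok := ParseJobState(raw) // string value -> enum value
	if !ok || !s.Valid() {
		return "", false
	}
	_ = s.String()           // e.g. "RUNNING"
	_ = s.VarName()          // e.g. "JobStateRunning"
	_ = s.Description()      // taken from the data comment
	_ = JobStateValues()     // all declared values
	_ = JobStateValuesMeta() // []enums.EnumMetaValue for all values
	return s, true
}
```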

View File

@@ -0,0 +1,91 @@
package bfcodegen
import (
_ "embed"
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/cmdext"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/tst"
"os"
"path/filepath"
"testing"
"time"
)
//go:embed _test_example_1.tgz
var EnumExampleModels1 []byte
//go:embed _test_example_2.tgz
var EnumExampleModels2 []byte
func TestGenerateEnumSpecs(t *testing.T) {
tmpFile := filepath.Join(t.TempDir(), langext.MustHexUUID()+".tgz")
tmpDir := filepath.Join(t.TempDir(), langext.MustHexUUID())
err := os.WriteFile(tmpFile, EnumExampleModels1, 0o777)
tst.AssertNoErr(t, err)
t.Cleanup(func() { _ = os.Remove(tmpFile) })
err = os.Mkdir(tmpDir, 0o777)
tst.AssertNoErr(t, err)
t.Cleanup(func() { _ = os.RemoveAll(tmpDir) })
_, err = cmdext.Runner("tar").Arg("-xvzf").Arg(tmpFile).Arg("-C").Arg(tmpDir).FailOnExitCode().FailOnTimeout().Timeout(time.Minute).Run()
tst.AssertNoErr(t, err)
s1, cs1, _, err := _generateEnumSpecs(tmpDir, "", "N/A", true, true)
tst.AssertNoErr(t, err)
s2, cs2, _, err := _generateEnumSpecs(tmpDir, "", "N/A", true, true)
tst.AssertNoErr(t, err)
tst.AssertEqual(t, cs1, cs2)
tst.AssertEqual(t, s1, s2)
fmt.Println()
fmt.Println()
fmt.Println()
fmt.Println("=====================================================================================================")
fmt.Println(s1)
fmt.Println("=====================================================================================================")
fmt.Println()
fmt.Println()
fmt.Println()
}
func TestGenerateEnumSpecsData(t *testing.T) {
tmpFile := filepath.Join(t.TempDir(), langext.MustHexUUID()+".tgz")
tmpDir := filepath.Join(t.TempDir(), langext.MustHexUUID())
err := os.WriteFile(tmpFile, EnumExampleModels2, 0o777)
tst.AssertNoErr(t, err)
t.Cleanup(func() { _ = os.Remove(tmpFile) })
err = os.Mkdir(tmpDir, 0o777)
tst.AssertNoErr(t, err)
t.Cleanup(func() { _ = os.RemoveAll(tmpDir) })
_, err = cmdext.Runner("tar").Arg("-xvzf").Arg(tmpFile).Arg("-C").Arg(tmpDir).FailOnExitCode().FailOnTimeout().Timeout(time.Minute).Run()
tst.AssertNoErr(t, err)
s1, _, _, err := _generateEnumSpecs(tmpDir, "", "", true, true)
tst.AssertNoErr(t, err)
fmt.Println()
fmt.Println()
fmt.Println()
fmt.Println("=====================================================================================================")
fmt.Println(s1)
fmt.Println("=====================================================================================================")
fmt.Println()
fmt.Println()
fmt.Println()
}

198
bfcodegen/id-generate.go Normal file
View File

@@ -0,0 +1,198 @@
package bfcodegen
import (
"bytes"
_ "embed"
"errors"
"fmt"
"go/format"
"git.blackforestbytes.com/BlackForestBytes/goext"
"git.blackforestbytes.com/BlackForestBytes/goext/cryptext"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/rext"
"io"
"os"
"path"
"path/filepath"
"regexp"
"strings"
"text/template"
)
type IDDef struct {
File string
FileRelative string
Name string
}
type IDGenOptions struct {
DebugOutput *bool
}
var rexIDPackage = rext.W(regexp.MustCompile(`^package\s+(?P<name>[A-Za-z0-9_]+)\s*$`))
var rexIDDef = rext.W(regexp.MustCompile(`^\s*type\s+(?P<name>[A-Za-z0-9_]+)\s+string\s*//\s*(@id:type).*$`))
var rexIDChecksumConst = rext.W(regexp.MustCompile(`const ChecksumIDGenerator = "(?P<cs>[A-Za-z0-9_]*)"`))
//go:embed id-generate.template
var templateIDGenerateText string
func GenerateIDSpecs(sourceDir string, destFile string, opt IDGenOptions) error {
debugOutput := langext.Coalesce(opt.DebugOutput, false)
files, err := os.ReadDir(sourceDir)
if err != nil {
return err
}
oldChecksum := "N/A"
if _, err := os.Stat(destFile); !os.IsNotExist(err) {
content, err := os.ReadFile(destFile)
if err != nil {
return err
}
if m, ok := rexIDChecksumConst.MatchFirst(string(content)); ok {
oldChecksum = m.GroupByName("cs").Value()
}
}
files = langext.ArrFilter(files, func(v os.DirEntry) bool { return v.Name() != path.Base(destFile) })
files = langext.ArrFilter(files, func(v os.DirEntry) bool { return strings.HasSuffix(v.Name(), ".go") })
files = langext.ArrFilter(files, func(v os.DirEntry) bool { return !strings.HasSuffix(v.Name(), "_gen.go") })
langext.SortBy(files, func(v os.DirEntry) string { return v.Name() })
newChecksumStr := goext.GoextVersion
for _, f := range files {
content, err := os.ReadFile(path.Join(sourceDir, f.Name()))
if err != nil {
return err
}
newChecksumStr += "\n" + f.Name() + "\t" + cryptext.BytesSha256(content)
}
newChecksum := cryptext.BytesSha256([]byte(newChecksumStr))
if newChecksum != oldChecksum {
fmt.Printf("[IDGenerate] Checksum has changed ( %s -> %s ), will generate new file\n\n", oldChecksum, newChecksum)
} else {
fmt.Printf("[IDGenerate] Checksum unchanged ( %s ), nothing to do\n", oldChecksum)
return nil
}
allIDs := make([]IDDef, 0)
pkgname := ""
for _, f := range files {
if debugOutput {
fmt.Printf("========= %s =========\n\n", f.Name())
}
fileIDs, pn, err := processIDFile(sourceDir, path.Join(sourceDir, f.Name()), debugOutput)
if err != nil {
return err
}
if debugOutput {
fmt.Printf("\n")
}
allIDs = append(allIDs, fileIDs...)
if pn != "" {
pkgname = pn
}
}
if pkgname == "" {
return errors.New("no package name found in any file")
}
fdata, err := format.Source([]byte(fmtIDOutput(newChecksum, allIDs, pkgname)))
if err != nil {
return err
}
err = os.WriteFile(destFile, fdata, 0o755)
if err != nil {
return err
}
return nil
}
func processIDFile(basedir string, fn string, debugOutput bool) ([]IDDef, string, error) {
file, err := os.Open(fn)
if err != nil {
return nil, "", err
}
defer func() { _ = file.Close() }()
bin, err := io.ReadAll(file)
if err != nil {
return nil, "", err
}
lines := strings.Split(string(bin), "\n")
ids := make([]IDDef, 0)
pkgname := ""
for i, line := range lines {
if i == 0 && strings.HasPrefix(line, "// Code generated by") {
break
}
if match, ok := rexIDPackage.MatchFirst(line); i == 0 && ok {
pkgname = match.GroupByName("name").Value()
continue
}
if match, ok := rexIDDef.MatchFirst(line); ok {
rfp, err := filepath.Rel(basedir, fn)
if err != nil {
return nil, "", err
}
def := IDDef{
File: fn,
FileRelative: rfp,
Name: match.GroupByName("name").Value(),
}
if debugOutput {
fmt.Printf("Found ID definition { '%s' }\n", def.Name)
}
ids = append(ids, def)
}
}
return ids, pkgname, nil
}
func fmtIDOutput(cs string, ids []IDDef, pkgname string) string {
templ := template.Must(template.New("id-generate").Parse(templateIDGenerateText))
buffer := bytes.Buffer{}
anyDef := langext.ArrFirstOrNil(ids, func(def IDDef) bool { return def.Name == "AnyID" || def.Name == "AnyId" })
err := templ.Execute(&buffer, langext.H{
"PkgName": pkgname,
"Checksum": cs,
"GoextVersion": goext.GoextVersion,
"IDs": ids,
"AnyDef": anyDef,
})
if err != nil {
panic(err)
}
return buffer.String()
}
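
This generator mirrors the csid one above but produces MongoDB-ObjectID-backed IDs; it picks up `string` types annotated with `@id:type` (see `rexIDDef`). A sketch of an input file - the package and type names are invented; note that declaring a type named `AnyID` (or `AnyId`) makes the template below additionally emit `AsAny()`/`AsAnyPtr()` converters on every ID type:

```go
// models/ids.go - hypothetical input for GenerateIDSpecs
package models

type AnyID       string // @id:type
type UserID      string // @id:type
type CommunityID string // @id:type
```

As with the other generators, a call like `GenerateIDSpecs("./models", "./models/id_gen.go", IDGenOptions{DebugOutput: langext.PTrue})` writes the generated file; the `TestGenerateIDSpecs` test below shows the full round trip.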

View File

@@ -0,0 +1,56 @@
// Code generated by id-generate.go DO NOT EDIT.
package {{.PkgName}}
import "go.mongodb.org/mongo-driver/bson"
import "go.mongodb.org/mongo-driver/bson/bsontype"
import "go.mongodb.org/mongo-driver/bson/primitive"
import "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
const ChecksumIDGenerator = "{{.Checksum}}" // GoExtVersion: {{.GoextVersion}}
{{range .IDs}}
// ================================ {{.Name}} ({{.FileRelative}}) ================================
func (i {{.Name}}) MarshalBSONValue() (bsontype.Type, []byte, error) {
if objId, err := primitive.ObjectIDFromHex(string(i)); err == nil {
return bson.MarshalValue(objId)
} else {
return 0, nil, exerr.New(exerr.TypeMarshalEntityID, "Failed to marshal {{.Name}}("+i.String()+") to ObjectId").Str("value", string(i)).Type("type", i).Build()
}
}
func (i {{.Name}}) String() string {
return string(i)
}
func (i {{.Name}}) ObjID() (primitive.ObjectID, error) {
return primitive.ObjectIDFromHex(string(i))
}
func (i {{.Name}}) Valid() bool {
_, err := primitive.ObjectIDFromHex(string(i))
return err == nil
}
{{if ne $.AnyDef nil}}
func (i {{.Name}}) AsAny() {{$.AnyDef.Name}} {
return {{$.AnyDef.Name}}(i)
}
func (i {{.Name}}) AsAnyPtr() *{{$.AnyDef.Name}} {
v := {{$.AnyDef.Name}}(i)
return &v
}
{{end}}
func (i {{.Name}}) IsZero() bool {
return i == ""
}
func New{{.Name}}() {{.Name}} {
return {{.Name}}(primitive.NewObjectID().Hex())
}
{{end}}
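
A hedged sketch of the generated surface for one of the hypothetical types above:

```go
package models // same hypothetical package the generated id_gen.go lives in

func exampleUserID() {
	id := NewUserID() // hex string of a fresh primitive.ObjectID
	_ = id.Valid()    // true - the value parses back into an ObjectID

	oid, err := id.ObjID() // the underlying primitive.ObjectID (plus any parse error)
	_, _ = oid, err

	_ = id.IsZero()     // false; only true for the empty string
	anyID := id.AsAny() // only generated because an AnyID type was declared above
	_ = anyID
}
```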

View File

@@ -0,0 +1,52 @@
package bfcodegen
import (
_ "embed"
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/cmdext"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/tst"
"os"
"path/filepath"
"testing"
"time"
)
//go:embed _test_example_1.tgz
var IDExampleModels1 []byte
func TestGenerateIDSpecs(t *testing.T) {
tmpFile := filepath.Join(t.TempDir(), langext.MustHexUUID()+".tgz")
tmpDir := filepath.Join(t.TempDir(), langext.MustHexUUID())
err := os.WriteFile(tmpFile, IDExampleModels1, 0o777)
tst.AssertNoErr(t, err)
t.Cleanup(func() { _ = os.Remove(tmpFile) })
err = os.Mkdir(tmpDir, 0o777)
tst.AssertNoErr(t, err)
t.Cleanup(func() { _ = os.RemoveAll(tmpDir) })
_, err = cmdext.Runner("tar").Arg("-xvzf").Arg(tmpFile).Arg("-C").Arg(tmpDir).FailOnExitCode().FailOnTimeout().Timeout(time.Minute).Run()
tst.AssertNoErr(t, err)
err = GenerateIDSpecs(tmpDir, tmpDir+"/id_gen.go", IDGenOptions{DebugOutput: langext.PTrue})
tst.AssertNoErr(t, err)
err = GenerateIDSpecs(tmpDir, tmpDir+"/id_gen.go", IDGenOptions{DebugOutput: langext.PTrue})
tst.AssertNoErr(t, err)
fmt.Println()
fmt.Println()
fmt.Println()
fmt.Println("=====================================================================================================")
fmt.Println(string(tst.Must(os.ReadFile(tmpDir + "/id_gen.go"))(t)))
fmt.Println("=====================================================================================================")
fmt.Println()
fmt.Println()
fmt.Println()
}

100
cmdext/builder.go Normal file
View File

@@ -0,0 +1,100 @@
package cmdext
import (
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"time"
)
type CommandRunner struct {
program string
args []string
timeout *time.Duration
env []string
listener []CommandListener
enforceExitCodes *[]int
enforceNoTimeout bool
enforceNoStderr bool
}
func Runner(program string) *CommandRunner {
return &CommandRunner{
program: program,
args: make([]string, 0),
timeout: nil,
env: make([]string, 0),
listener: make([]CommandListener, 0),
enforceExitCodes: nil,
enforceNoTimeout: false,
enforceNoStderr: false,
}
}
func (r *CommandRunner) Arg(arg string) *CommandRunner {
r.args = append(r.args, arg)
return r
}
func (r *CommandRunner) Args(arg []string) *CommandRunner {
r.args = append(r.args, arg...)
return r
}
func (r *CommandRunner) Timeout(timeout time.Duration) *CommandRunner {
r.timeout = &timeout
return r
}
func (r *CommandRunner) Env(key, value string) *CommandRunner {
r.env = append(r.env, fmt.Sprintf("%s=%s", key, value))
return r
}
func (r *CommandRunner) RawEnv(env string) *CommandRunner {
r.env = append(r.env, env)
return r
}
func (r *CommandRunner) Envs(env []string) *CommandRunner {
r.env = append(r.env, env...)
return r
}
func (r *CommandRunner) EnsureExitcode(arg ...int) *CommandRunner {
r.enforceExitCodes = langext.Ptr(langext.ForceArray(arg))
return r
}
func (r *CommandRunner) FailOnExitCode() *CommandRunner {
r.enforceExitCodes = langext.Ptr([]int{0})
return r
}
func (r *CommandRunner) FailOnTimeout() *CommandRunner {
r.enforceNoTimeout = true
return r
}
func (r *CommandRunner) FailOnStderr() *CommandRunner {
r.enforceNoStderr = true
return r
}
func (r *CommandRunner) Listen(lstr CommandListener) *CommandRunner {
r.listener = append(r.listener, lstr)
return r
}
func (r *CommandRunner) ListenStdout(lstr func(string)) *CommandRunner {
r.listener = append(r.listener, genericCommandListener{_readStdoutLine: &lstr})
return r
}
func (r *CommandRunner) ListenStderr(lstr func(string)) *CommandRunner {
r.listener = append(r.listener, genericCommandListener{_readStderrLine: &lstr})
return r
}
func (r *CommandRunner) Run() (CommandResult, error) {
return run(*r)
}
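
For illustration, a minimal sketch of the fluent builder in use; the git invocation, its arguments and the environment variable are made up for the example, and nothing is executed until Run() is called.

package main

import (
	"fmt"
	"time"

	"git.blackforestbytes.com/BlackForestBytes/goext/cmdext"
)

func main() {
	// every builder method returns the runner, so calls can be chained;
	// the process is only started by the final Run()
	res, err := cmdext.Runner("git").
		Arg("status").
		Arg("--porcelain").
		Env("GIT_PAGER", "cat").
		Timeout(10 * time.Second).
		FailOnTimeout().
		FailOnExitCode().
		Run()
	if err != nil {
		panic(err)
	}
	fmt.Print(res.StdOut)
}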

196
cmdext/cmdrunner.go Normal file
View File

@@ -0,0 +1,196 @@
package cmdext
import (
"errors"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/mathext"
"git.blackforestbytes.com/BlackForestBytes/goext/syncext"
"os/exec"
"time"
)
var ErrExitCode = errors.New("process exited with an unexpected exitcode")
var ErrTimeout = errors.New("process did not exit after the specified timeout")
var ErrStderrPrint = errors.New("process did print to stderr stream")
type CommandResult struct {
StdOut string
StdErr string
StdCombined string
ExitCode int
CommandTimedOut bool
}
func run(opt CommandRunner) (CommandResult, error) {
cmd := exec.Command(opt.program, opt.args...)
cmd.Env = append(cmd.Env, opt.env...)
stdoutPipe, err := cmd.StdoutPipe()
if err != nil {
return CommandResult{}, err
}
stderrPipe, err := cmd.StderrPipe()
if err != nil {
return CommandResult{}, err
}
preader := pipeReader{
lineBufferSize: langext.Ptr(128 * 1024 * 1024), // 128MB max size of a single line, is hopefully enough....
stdout: stdoutPipe,
stderr: stderrPipe,
}
err = cmd.Start()
if err != nil {
return CommandResult{}, err
}
type resultObj struct {
stdout string
stderr string
stdcombined string
err error
}
stderrFailChan := make(chan bool)
outputChan := make(chan resultObj)
go func() {
// we need to first fully read the pipes and then call Wait
// see https://pkg.go.dev/os/exec#Cmd.StdoutPipe
listener := make([]CommandListener, 0)
listener = append(listener, opt.listener...)
if opt.enforceNoStderr {
listener = append(listener, genericCommandListener{
_readRawStderr: langext.Ptr(func(v []byte) {
if len(v) > 0 {
stderrFailChan <- true
}
}),
})
}
stdout, stderr, stdcombined, err := preader.Read(listener)
if err != nil {
outputChan <- resultObj{stdout, stderr, stdcombined, err}
_ = cmd.Process.Kill()
return
}
err = cmd.Wait()
if err != nil {
outputChan <- resultObj{stdout, stderr, stdcombined, err}
} else {
outputChan <- resultObj{stdout, stderr, stdcombined, nil}
}
}()
var timeoutChan <-chan time.Time = make(chan time.Time, 1)
if opt.timeout != nil {
timeoutChan = time.After(*opt.timeout)
}
select {
case <-timeoutChan:
_ = cmd.Process.Kill()
for _, lstr := range opt.listener {
lstr.Timeout()
}
if fallback, ok := syncext.ReadChannelWithTimeout(outputChan, mathext.Min(32*time.Millisecond, *opt.timeout)); ok {
// most of the time the cmd.Process.Kill() should also have finished the pipereader
// and we can at least return the already collected stdout, stderr, etc
res := CommandResult{
StdOut: fallback.stdout,
StdErr: fallback.stderr,
StdCombined: fallback.stdcombined,
ExitCode: -1,
CommandTimedOut: true,
}
if opt.enforceNoTimeout {
return res, ErrTimeout
}
return res, nil
} else {
res := CommandResult{
StdOut: "",
StdErr: "",
StdCombined: "",
ExitCode: -1,
CommandTimedOut: true,
}
if opt.enforceNoTimeout {
return res, ErrTimeout
}
return res, nil
}
case <-stderrFailChan:
_ = cmd.Process.Kill()
if fallback, ok := syncext.ReadChannelWithTimeout(outputChan, 32*time.Millisecond); ok {
// most of the time the cmd.Process.Kill() should also have finished the pipereader
// and we can at least return the already collected stdout, stderr, etc
res := CommandResult{
StdOut: fallback.stdout,
StdErr: fallback.stderr,
StdCombined: fallback.stdcombined,
ExitCode: -1,
CommandTimedOut: false,
}
return res, ErrStderrPrint
} else {
res := CommandResult{
StdOut: "",
StdErr: "",
StdCombined: "",
ExitCode: -1,
CommandTimedOut: false,
}
return res, ErrStderrPrint
}
case outobj := <-outputChan:
var exiterr *exec.ExitError
if errors.As(outobj.err, &exiterr) {
excode := exiterr.ExitCode()
for _, lstr := range opt.listener {
lstr.Finished(excode)
}
res := CommandResult{
StdOut: outobj.stdout,
StdErr: outobj.stderr,
StdCombined: outobj.stdcombined,
ExitCode: excode,
CommandTimedOut: false,
}
if opt.enforceExitCodes != nil && !langext.InArray(excode, *opt.enforceExitCodes) {
return res, ErrExitCode
}
return res, nil
} else if outobj.err != nil {
return CommandResult{}, outobj.err
} else {
for _, lstr := range opt.listener {
lstr.Finished(0)
}
res := CommandResult{
StdOut: outobj.stdout,
StdErr: outobj.stderr,
StdCombined: outobj.stdcombined,
ExitCode: 0,
CommandTimedOut: false,
}
if opt.enforceExitCodes != nil && !langext.InArray(0, *opt.enforceExitCodes) {
return res, ErrExitCode
}
return res, nil
}
}
}
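
A short sketch of how the sentinel errors surface to callers, assuming a POSIX sh and sleep are available on the host:

package main

import (
	"errors"
	"fmt"
	"time"

	"git.blackforestbytes.com/BlackForestBytes/goext/cmdext"
)

func main() {
	// exit code 3 violates FailOnExitCode (which only allows 0), so Run
	// returns the collected result together with ErrExitCode
	res, err := cmdext.Runner("sh").Arg("-c").Arg("exit 3").
		FailOnExitCode().
		Run()
	if errors.Is(err, cmdext.ErrExitCode) {
		fmt.Println("unexpected exit code:", res.ExitCode) // 3
	}

	// a command that outlives its timeout is killed; with FailOnTimeout
	// the call additionally reports ErrTimeout
	_, err = cmdext.Runner("sleep").Arg("10").
		Timeout(100 * time.Millisecond).
		FailOnTimeout().
		Run()
	if errors.Is(err, cmdext.ErrTimeout) {
		fmt.Println("timed out")
	}
}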

348
cmdext/cmdrunner_test.go Normal file
View File

@@ -0,0 +1,348 @@
package cmdext
import (
"errors"
"fmt"
"testing"
"time"
)
func TestStdout(t *testing.T) {
res1, err := Runner("printf").Arg("hello").Run()
if err != nil {
t.Errorf("%v", err)
}
if res1.CommandTimedOut {
t.Errorf("Timeout")
}
if res1.ExitCode != 0 {
t.Errorf("res1.ExitCode == %v", res1.ExitCode)
}
if res1.StdErr != "" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "hello" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "hello\n" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestStderr(t *testing.T) {
res1, err := Runner("python3").Arg("-c").Arg("import sys; print(\"error\", file=sys.stderr, end='')").Run()
if err != nil {
t.Errorf("%v", err)
}
if res1.CommandTimedOut {
t.Errorf("Timeout")
}
if res1.ExitCode != 0 {
t.Errorf("res1.ExitCode == %v", res1.ExitCode)
}
if res1.StdErr != "error" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "error\n" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestStdcombined(t *testing.T) {
res1, err := Runner("python3").
Arg("-c").
Arg("import sys; import time; print(\"1\", file=sys.stderr, flush=True); time.sleep(0.1); print(\"2\", file=sys.stdout, flush=True); time.sleep(0.1); print(\"3\", file=sys.stderr, flush=True)").
Run()
if err != nil {
t.Errorf("%v", err)
}
if res1.CommandTimedOut {
t.Errorf("Timeout")
}
if res1.ExitCode != 0 {
t.Errorf("res1.ExitCode == %v", res1.ExitCode)
}
if res1.StdErr != "1\n3\n" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "2\n" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "1\n2\n3\n" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestPartialRead(t *testing.T) {
res1, err := Runner("python3").
Arg("-c").
Arg("import sys; import time; print(\"first message\", flush=True); time.sleep(5); print(\"cant see me\", flush=True);").
Timeout(100 * time.Millisecond).
Run()
if err != nil {
t.Errorf("%v", err)
}
if !res1.CommandTimedOut {
t.Errorf("!CommandTimedOut")
}
if res1.StdErr != "" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "first message\n" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "first message\n" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestPartialReadStderr(t *testing.T) {
res1, err := Runner("python3").
Arg("-c").
Arg("import sys; import time; print(\"first message\", file=sys.stderr, flush=True); time.sleep(5); print(\"cant see me\", file=sys.stderr, flush=True);").
Timeout(100 * time.Millisecond).
Run()
if err != nil {
t.Errorf("%v", err)
}
if !res1.CommandTimedOut {
t.Errorf("!CommandTimedOut")
}
if res1.StdErr != "first message\n" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "first message\n" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestReadUnflushedStdout(t *testing.T) {
res1, err := Runner("python3").Arg("-c").Arg("import sys; print(\"message101\", file=sys.stdout, end='')").Run()
if err != nil {
t.Errorf("%v", err)
}
if res1.CommandTimedOut {
t.Errorf("Timeout")
}
if res1.ExitCode != 0 {
t.Errorf("res1.ExitCode == %v", res1.ExitCode)
}
if res1.StdErr != "" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "message101" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "message101\n" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestReadUnflushedStderr(t *testing.T) {
res1, err := Runner("python3").Arg("-c").Arg("import sys; print(\"message101\", file=sys.stderr, end='')").Run()
if err != nil {
t.Errorf("%v", err)
}
if res1.CommandTimedOut {
t.Errorf("Timeout")
}
if res1.ExitCode != 0 {
t.Errorf("res1.ExitCode == %v", res1.ExitCode)
}
if res1.StdErr != "message101" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "message101\n" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestPartialReadUnflushed(t *testing.T) {
t.SkipNow()
res1, err := Runner("python3").
Arg("-c").
Arg("import sys; import time; print(\"first message\", end=''); time.sleep(5); print(\"cant see me\", end='');").
Timeout(100 * time.Millisecond).
Run()
if err != nil {
t.Errorf("%v", err)
}
if !res1.CommandTimedOut {
t.Errorf("!CommandTimedOut")
}
if res1.StdErr != "" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "first message" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "first message" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestPartialReadUnflushedStderr(t *testing.T) {
t.SkipNow()
res1, err := Runner("python3").
Arg("-c").
Arg("import sys; import time; print(\"first message\", file=sys.stderr, end=''); time.sleep(5); print(\"cant see me\", file=sys.stderr, end='');").
Timeout(100 * time.Millisecond).
Run()
if err != nil {
t.Errorf("%v", err)
}
if !res1.CommandTimedOut {
t.Errorf("!CommandTimedOut")
}
if res1.StdErr != "first message" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "first message" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestListener(t *testing.T) {
res1, err := Runner("python3").
Arg("-c").
Arg("import sys;" +
"import time;" +
"print(\"message 1\", flush=True);" +
"time.sleep(1);" +
"print(\"message 2\", flush=True);" +
"time.sleep(1);" +
"print(\"message 3\", flush=True);" +
"time.sleep(1);" +
"print(\"message 4\", file=sys.stderr, flush=True);" +
"time.sleep(1);" +
"print(\"message 5\", flush=True);" +
"time.sleep(1);" +
"print(\"final\");").
ListenStdout(func(s string) { fmt.Printf("@@STDOUT <<- %v (%v)\n", s, time.Now().Format(time.RFC3339Nano)) }).
ListenStderr(func(s string) { fmt.Printf("@@STDERR <<- %v (%v)\n", s, time.Now().Format(time.RFC3339Nano)) }).
Timeout(10 * time.Second).
Run()
if err != nil {
panic(err)
}
if res1.CommandTimedOut {
t.Errorf("Timeout")
}
if res1.ExitCode != 0 {
t.Errorf("res1.ExitCode == %v", res1.ExitCode)
}
}
func TestLongStdout(t *testing.T) {
res1, err := Runner("python3").
Arg("-c").
Arg("import sys; import time; print(\"X\" * 125001 + \"\\n\"); print(\"Y\" * 125001 + \"\\n\"); print(\"Z\" * 125001 + \"\\n\");").
Timeout(5000 * time.Millisecond).
Run()
if err != nil {
t.Errorf("%v", err)
}
if res1.CommandTimedOut {
t.Errorf("Timeout")
}
if res1.ExitCode != 0 {
t.Errorf("res1.ExitCode == %v", res1.ExitCode)
}
if res1.StdErr != "" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if len(res1.StdOut) != 375009 {
t.Errorf("len(res1.StdOut) == '%v'", len(res1.StdOut))
}
}
func TestFailOnTimeout(t *testing.T) {
_, err := Runner("sleep").Arg("2").Timeout(200 * time.Millisecond).FailOnTimeout().Run()
if !errors.Is(err, ErrTimeout) {
t.Errorf("wrong err := %v", err)
}
}
func TestFailOnStderr(t *testing.T) {
res1, err := Runner("python3").Arg("-c").Arg("import sys; print(\"error\", file=sys.stderr, end='')").FailOnStderr().Run()
if err == nil {
t.Errorf("no err")
}
if res1.CommandTimedOut {
t.Errorf("Timeout")
}
if res1.ExitCode != -1 {
t.Errorf("res1.ExitCode == %v", res1.ExitCode)
}
if res1.StdErr != "error" {
t.Errorf("res1.StdErr == '%v'", res1.StdErr)
}
if res1.StdOut != "" {
t.Errorf("res1.StdOut == '%v'", res1.StdOut)
}
if res1.StdCombined != "error\n" {
t.Errorf("res1.StdCombined == '%v'", res1.StdCombined)
}
}
func TestFailOnExitcode(t *testing.T) {
_, err := Runner("false").Timeout(200 * time.Millisecond).FailOnExitCode().Run()
if !errors.Is(err, ErrExitCode) {
t.Errorf("wrong err := %v", err)
}
}
func TestEnsureExitcode1(t *testing.T) {
_, err := Runner("false").Timeout(200 * time.Millisecond).EnsureExitcode(1).Run()
if err != nil {
t.Errorf("wrong err := %v", err)
}
}
func TestEnsureExitcode2(t *testing.T) {
_, err := Runner("false").Timeout(200*time.Millisecond).EnsureExitcode(0, 2, 3).Run()
if err != ErrExitCode {
t.Errorf("wrong err := %v", err)
}
}

12
cmdext/helper.go Normal file
View File

@@ -0,0 +1,12 @@
package cmdext
import "time"
func RunCommand(program string, args []string, timeout *time.Duration) (CommandResult, error) {
b := Runner(program)
b = b.Args(args)
if timeout != nil {
b = b.Timeout(*timeout)
}
return b.Run()
}
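
A quick sketch of the convenience wrapper; langext.Ptr supplies the optional timeout and the printf call is illustrative:

package main

import (
	"fmt"
	"time"

	"git.blackforestbytes.com/BlackForestBytes/goext/cmdext"
	"git.blackforestbytes.com/BlackForestBytes/goext/langext"
)

func main() {
	// a nil timeout means "wait until the process exits on its own"
	res, err := cmdext.RunCommand("printf", []string{"hello"}, langext.Ptr(2*time.Second))
	if err != nil {
		panic(err)
	}
	fmt.Println(res.StdOut) // "hello"
}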

57
cmdext/listener.go Normal file
View File

@@ -0,0 +1,57 @@
package cmdext
type CommandListener interface {
ReadRawStdout([]byte)
ReadRawStderr([]byte)
ReadStdoutLine(string)
ReadStderrLine(string)
Finished(int)
Timeout()
}
type genericCommandListener struct {
_readRawStdout *func([]byte)
_readRawStderr *func([]byte)
_readStdoutLine *func(string)
_readStderrLine *func(string)
_finished *func(int)
_timeout *func()
}
func (g genericCommandListener) ReadRawStdout(v []byte) {
if g._readRawStdout != nil {
(*g._readRawStdout)(v)
}
}
func (g genericCommandListener) ReadRawStderr(v []byte) {
if g._readRawStderr != nil {
(*g._readRawStderr)(v)
}
}
func (g genericCommandListener) ReadStdoutLine(v string) {
if g._readStdoutLine != nil {
(*g._readStdoutLine)(v)
}
}
func (g genericCommandListener) ReadStderrLine(v string) {
if g._readStderrLine != nil {
(*g._readStderrLine)(v)
}
}
func (g genericCommandListener) Finished(v int) {
if g._finished != nil {
(*g._finished)(v)
}
}
func (g genericCommandListener) Timeout() {
if g._timeout != nil {
(*g._timeout)()
}
}
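
Besides ListenStdout/ListenStderr, the exported CommandListener interface can be implemented directly and attached with Listen. A sketch with a hypothetical lineCounter listener:

package main

import (
	"fmt"

	"git.blackforestbytes.com/BlackForestBytes/goext/cmdext"
)

// lineCounter is a hypothetical listener that only counts emitted lines;
// the raw-byte callbacks and Timeout are deliberately no-ops.
type lineCounter struct {
	stdout int
	stderr int
}

func (l *lineCounter) ReadRawStdout([]byte)  {}
func (l *lineCounter) ReadRawStderr([]byte)  {}
func (l *lineCounter) ReadStdoutLine(string) { l.stdout++ }
func (l *lineCounter) ReadStderrLine(string) { l.stderr++ }
func (l *lineCounter) Finished(code int)     { fmt.Println("exit code:", code) }
func (l *lineCounter) Timeout()              { fmt.Println("timed out") }

func main() {
	lc := &lineCounter{}
	_, err := cmdext.Runner("printf").Arg("a\\nb\\nc\\n").Listen(lc).Run()
	if err != nil {
		panic(err)
	}
	fmt.Printf("stdout lines: %d, stderr lines: %d\n", lc.stdout, lc.stderr) // 3, 0
}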

158
cmdext/pipereader.go Normal file
View File

@@ -0,0 +1,158 @@
package cmdext
import (
"bufio"
"git.blackforestbytes.com/BlackForestBytes/goext/syncext"
"io"
"sync"
)
type pipeReader struct {
lineBufferSize *int
stdout io.ReadCloser
stderr io.ReadCloser
}
// Read reads stdout and stderr until both pipes are closed
// it also splits both pipes into lines and calls the listeners
func (pr *pipeReader) Read(listener []CommandListener) (string, string, string, error) {
type combevt struct {
line string
stop bool
}
errch := make(chan error, 8)
wg := sync.WaitGroup{}
// [1] read raw stdout
wg.Add(1)
stdoutBufferReader, stdoutBufferWriter := io.Pipe()
stdout := ""
go func() {
buf := make([]byte, 128)
for {
n, err := pr.stdout.Read(buf)
if n > 0 {
txt := string(buf[:n])
stdout += txt
_, _ = stdoutBufferWriter.Write(buf[:n])
for _, lstr := range listener {
lstr.ReadRawStdout(buf[:n])
}
}
if err == io.EOF {
break
}
if err != nil {
errch <- err
break
}
}
_ = stdoutBufferWriter.Close()
wg.Done()
}()
// [2] read raw stderr
wg.Add(1)
stderrBufferReader, stderrBufferWriter := io.Pipe()
stderr := ""
go func() {
buf := make([]byte, 128)
for {
n, err := pr.stderr.Read(buf)
if n > 0 {
txt := string(buf[:n])
stderr += txt
_, _ = stderrBufferWriter.Write(buf[:n])
for _, lstr := range listener {
lstr.ReadRawStderr(buf[:n])
}
}
if err == io.EOF {
break
}
if err != nil {
errch <- err
break
}
}
_ = stderrBufferWriter.Close()
wg.Done()
}()
combch := make(chan combevt, 32)
// [3] collect stdout line-by-line
wg.Add(1)
go func() {
scanner := bufio.NewScanner(stdoutBufferReader)
if pr.lineBufferSize != nil {
scanner.Buffer([]byte{}, *pr.lineBufferSize)
}
for scanner.Scan() {
txt := scanner.Text()
for _, lstr := range listener {
lstr.ReadStdoutLine(txt)
}
combch <- combevt{txt, false}
}
if err := scanner.Err(); err != nil {
errch <- err
}
combch <- combevt{"", true}
wg.Done()
}()
// [4] collect stderr line-by-line
wg.Add(1)
go func() {
scanner := bufio.NewScanner(stderrBufferReader)
if pr.lineBufferSize != nil {
scanner.Buffer([]byte{}, *pr.lineBufferSize)
}
for scanner.Scan() {
txt := scanner.Text()
for _, lstr := range listener {
lstr.ReadStderrLine(txt)
}
combch <- combevt{txt, false}
}
if err := scanner.Err(); err != nil {
errch <- err
}
combch <- combevt{"", true}
wg.Done()
}()
// [5] combine stdcombined
wg.Add(1)
stdcombined := ""
go func() {
stopctr := 0
for stopctr < 2 {
vvv := <-combch
if vvv.stop {
stopctr++
} else {
stdcombined += vvv.line + "\n" // this comes from bufio.Scanner and has no newlines...
}
}
wg.Done()
}()
// wait for all (5) goroutines to finish
wg.Wait()
if err, ok := syncext.ReadNonBlocking(errch); ok {
return "", "", "", err
}
return stdout, stderr, stdcombined, nil
}

View File

@@ -3,20 +3,33 @@ package confext
import (
"errors"
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/timeext"
"math/bits"
"os"
"reflect"
"strconv"
"strings"
"time"
)
// ApplyEnvOverrides overrides field values from environment variables
//
// fields must be tagged with `env:"env_key"`
//
// only works on exported fields
//
// fields without an env tag are ignored
// fields with an `env:"-"` tag are ignored
//
// sub-structs are recursively parsed (if they have an env tag) and the env-variable keys are delimited by the delim parameter
// sub-structs with `env:""` are also parsed, but the delimiter is skipped (they are handled as if they were one level higher)
func ApplyEnvOverrides[T any](prefix string, c *T, delim string) error {
rval := reflect.ValueOf(c).Elem()
return processEnvOverrides(rval, delim, prefix)
}
func processEnvOverrides(rval reflect.Value, delim string, prefix string) error {
rtyp := rval.Type()
for i := 0; i < rtyp.NumField(); i++ {
@@ -24,113 +37,159 @@ func ApplyEnvOverrides[T any](c *T) error {
rsfield := rtyp.Field(i)
rvfield := rval.Field(i)
if !rsfield.IsExported() {
continue
}
envkey, found := rsfield.Tag.Lookup("env")
if !found || envkey == "-" {
continue
}
if rvfield.Kind() == reflect.Struct && rvfield.Type() != reflect.TypeOf(time.UnixMilli(0)) {
subPrefix := prefix
if envkey != "" {
subPrefix = subPrefix + envkey + delim
}
err := processEnvOverrides(rvfield, delim, subPrefix)
if err != nil {
return err
}
continue
}
fullEnvKey := prefix + envkey
envval, efound := os.LookupEnv(fullEnvKey)
if !efound {
continue
}
if rvfield.Type().Kind() == reflect.Pointer {
newval, err := parseEnvToValue(envval, fullEnvKey, rvfield.Type().Elem())
if err != nil {
return err
}
// converts reflect.Value to pointer
ptrval := reflect.New(rvfield.Type().Elem())
ptrval.Elem().Set(newval)
rvfield.Set(ptrval)
fmt.Printf("[CONF] Overwrite config '%s' with '%s'\n", fullEnvKey, envval)
} else {
newval, err := parseEnvToValue(envval, fullEnvKey, rvfield.Type())
if err != nil {
return err
}
rvfield.Set(newval)
fmt.Printf("[CONF] Overwrite config '%s' with '%s'\n", fullEnvKey, envval)
}
}
return nil
}
func parseEnvToValue(envval string, fullEnvKey string, rvtype reflect.Type) (reflect.Value, error) {
if rvtype == reflect.TypeOf("") {
return reflect.ValueOf(envval), nil
} else if rvtype == reflect.TypeOf(int(0)) {
envint, err := strconv.ParseInt(envval, 10, bits.UintSize)
if err != nil {
return reflect.Value{}, errors.New(fmt.Sprintf("Failed to parse env-config variable '%s' to int (value := '%s')", fullEnvKey, envval))
}
return reflect.ValueOf(int(envint)), nil
} else if rvtype == reflect.TypeOf(int64(0)) {
envint, err := strconv.ParseInt(envval, 10, 64)
if err != nil {
return reflect.Value{}, errors.New(fmt.Sprintf("Failed to parse env-config variable '%s' to int64 (value := '%s')", fullEnvKey, envval))
}
return reflect.ValueOf(int64(envint)), nil
} else if rvtype == reflect.TypeOf(int32(0)) {
envint, err := strconv.ParseInt(envval, 10, 32)
if err != nil {
return reflect.Value{}, errors.New(fmt.Sprintf("Failed to parse env-config variable '%s' to int32 (value := '%s')", fullEnvKey, envval))
}
return reflect.ValueOf(int32(envint)), nil
} else if rvtype == reflect.TypeOf(int8(0)) {
envint, err := strconv.ParseInt(envval, 10, 8)
if err != nil {
return reflect.Value{}, errors.New(fmt.Sprintf("Failed to parse env-config variable '%s' to int32 (value := '%s')", fullEnvKey, envval))
}
return reflect.ValueOf(int8(envint)), nil
} else if rvtype == reflect.TypeOf(time.Duration(0)) {
dur, err := timeext.ParseDurationShortString(envval)
if err != nil {
return reflect.Value{}, errors.New(fmt.Sprintf("Failed to parse env-config variable '%s' to duration (value := '%s')", fullEnvKey, envval))
}
return reflect.ValueOf(dur), nil
} else if rvtype == reflect.TypeOf(time.UnixMilli(0)) {
tim, err := time.Parse(time.RFC3339Nano, envval)
if err != nil {
return reflect.Value{}, errors.New(fmt.Sprintf("Failed to parse env-config variable '%s' to time.time (value := '%s')", fullEnvKey, envval))
}
return reflect.ValueOf(tim), nil
} else if rvtype.ConvertibleTo(reflect.TypeOf(int(0))) {
envint, err := strconv.ParseInt(envval, 10, 8)
if err != nil {
return reflect.Value{}, errors.New(fmt.Sprintf("Failed to parse env-config variable '%s' to <%s, ,int> (value := '%s')", rvtype.Name(), fullEnvKey, envval))
}
envcvl := reflect.ValueOf(envint).Convert(rvtype)
return envcvl, nil
} else if rvtype.ConvertibleTo(reflect.TypeOf(false)) {
if strings.TrimSpace(strings.ToLower(envval)) == "true" {
return reflect.ValueOf(true).Convert(rvtype), nil
} else if strings.TrimSpace(strings.ToLower(envval)) == "false" {
return reflect.ValueOf(false).Convert(rvtype), nil
} else if strings.TrimSpace(strings.ToLower(envval)) == "1" {
return reflect.ValueOf(true).Convert(rvtype), nil
} else if strings.TrimSpace(strings.ToLower(envval)) == "0" {
return reflect.ValueOf(false).Convert(rvtype), nil
} else {
return reflect.Value{}, errors.New(fmt.Sprintf("Failed to parse env-config variable '%s' to <%s, ,bool> (value := '%s')", rvtype.Name(), fullEnvKey, envval))
}
} else if rvtype.ConvertibleTo(reflect.TypeOf("")) {
envcvl := reflect.ValueOf(envval).Convert(rvtype)
return envcvl, nil
} else {
return reflect.Value{}, errors.New(fmt.Sprintf("Unknown kind/type in config: [ %s | %s ]", rvtype.Kind().String(), rvtype.String()))
}
}
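
A sketch of the new prefix/delim signature in use; the Config/MongoConf structs and the MYAPP_* environment keys are hypothetical:

package main

import (
	"fmt"
	"os"
	"time"

	"git.blackforestbytes.com/BlackForestBytes/goext/confext"
)

type MongoConf struct {
	URI     string        `env:"URI"`
	Timeout time.Duration `env:"TIMEOUT"`
}

type Config struct {
	Port   int       `env:"PORT"`
	Mongo  MongoConf `env:"MONGO"` // sub-struct: keys become MYAPP_MONGO_<subkey>
	Secret string    `env:"-"`     // explicitly ignored
}

func main() {
	// hypothetical environment for the example
	_ = os.Setenv("MYAPP_PORT", "8080")
	_ = os.Setenv("MYAPP_MONGO_URI", "mongodb://localhost:27017")
	_ = os.Setenv("MYAPP_MONGO_TIMEOUT", "1min")

	cfg := Config{}
	if err := confext.ApplyEnvOverrides("MYAPP_", &cfg, "_"); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg) // {Port:8080 Mongo:{URI:mongodb://localhost:27017 Timeout:1m0s} Secret:}
}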

View File

@@ -1,6 +1,8 @@
package confext
import (
"git.blackforestbytes.com/BlackForestBytes/goext/timeext"
"git.blackforestbytes.com/BlackForestBytes/goext/tst"
"testing"
"time"
)
@@ -40,13 +42,13 @@ func TestApplyEnvOverridesNoop(t *testing.T) {
output := input
err := ApplyEnvOverrides("", &output, ".")
if err != nil {
t.Errorf("%v", err)
t.FailNow()
}
tst.AssertEqual(t, input, output)
}
func TestApplyEnvOverridesSimple(t *testing.T) {
@@ -66,6 +68,7 @@ func TestApplyEnvOverridesSimple(t *testing.T) {
V7 aliasstring `env:"TEST_V7"`
V8 time.Duration `env:"TEST_V8"`
V9 time.Time `env:"TEST_V9"`
VA bool `env:"TEST_VA"`
}
data := testdata{
@@ -80,6 +83,7 @@ func TestApplyEnvOverridesSimple(t *testing.T) {
V7: "7",
V8: 9,
V9: time.Unix(1671102873, 0),
VA: false,
}
t.Setenv("TEST_V1", "846")
@@ -91,22 +95,175 @@ func TestApplyEnvOverridesSimple(t *testing.T) {
t.Setenv("TEST_V7", "AAAAAA")
t.Setenv("TEST_V8", "1min4s")
t.Setenv("TEST_V9", "2009-11-10T23:00:00Z")
t.Setenv("TEST_VA", "true")
err := ApplyEnvOverrides("", &data, ".")
if err != nil {
t.Errorf("%v", err)
t.FailNow()
}
tst.AssertEqual(t, data.V1, 846)
tst.AssertEqual(t, data.V2, "hello_world")
tst.AssertEqual(t, data.V3, 6)
tst.AssertEqual(t, data.V4, 333)
tst.AssertEqual(t, data.V5, -937)
tst.AssertEqual(t, data.V6, 70)
tst.AssertEqual(t, data.V7, "AAAAAA")
tst.AssertEqual(t, data.V8, time.Second*64)
tst.AssertEqual(t, data.V9, time.Unix(1257894000, 0).UTC())
tst.AssertEqual(t, data.VA, true)
}
func TestApplyEnvOverridesRecursive(t *testing.T) {
type subdata struct {
V1 int `env:"SUB_V1"`
VX string ``
V2 string `env:"SUB_V2"`
V8 time.Duration `env:"SUB_V3"`
V9 time.Time `env:"SUB_V4"`
}
type testdata struct {
V1 int `env:"TEST_V1"`
VX string ``
Sub1 subdata ``
Sub2 subdata `env:"TEST_V2"`
Sub3 subdata `env:"TEST_V3"`
Sub4 subdata `env:""`
V5 string `env:"-"`
}
data := testdata{
V1: 1,
VX: "2",
V5: "no",
Sub1: subdata{
V1: 3,
VX: "4",
V2: "5",
V8: 6 * time.Second,
V9: time.Date(2000, 1, 7, 1, 1, 1, 0, time.UTC),
},
Sub2: subdata{
V1: 8,
VX: "9",
V2: "10",
V8: 11 * time.Second,
V9: time.Date(2000, 1, 12, 1, 1, 1, 0, timeext.TimezoneBerlin),
},
Sub3: subdata{
V1: 13,
VX: "14",
V2: "15",
V8: 16 * time.Second,
V9: time.Date(2000, 1, 17, 1, 1, 1, 0, timeext.TimezoneBerlin),
},
Sub4: subdata{
V1: 18,
VX: "19",
V2: "20",
V8: 21 * time.Second,
V9: time.Date(2000, 1, 22, 1, 1, 1, 0, timeext.TimezoneBerlin),
},
}
t.Setenv("TEST_V1", "999")
t.Setenv("-", "yes")
t.Setenv("TEST_V2_SUB_V1", "846")
t.Setenv("TEST_V2_SUB_V2", "222_hello_world")
t.Setenv("TEST_V2_SUB_V3", "1min4s")
t.Setenv("TEST_V2_SUB_V4", "2009-11-10T23:00:00Z")
t.Setenv("TEST_V3_SUB_V1", "33846")
t.Setenv("TEST_V3_SUB_V2", "33_hello_world")
t.Setenv("TEST_V3_SUB_V3", "33min4s")
t.Setenv("TEST_V3_SUB_V4", "2033-11-10T23:00:00Z")
t.Setenv("SUB_V1", "11")
t.Setenv("SUB_V2", "22")
t.Setenv("SUB_V3", "33min")
t.Setenv("SUB_V4", "2044-01-01T00:00:00Z")
err := ApplyEnvOverrides("", &data, "_")
if err != nil {
t.Errorf("%v", err)
t.FailNow()
}
tst.AssertEqual(t, data.V1, 999)
tst.AssertEqual(t, data.VX, "2")
tst.AssertEqual(t, data.V5, "no")
tst.AssertEqual(t, data.Sub1.V1, 3)
tst.AssertEqual(t, data.Sub1.VX, "4")
tst.AssertEqual(t, data.Sub1.V2, "5")
tst.AssertEqual(t, data.Sub1.V8, time.Second*6)
tst.AssertEqual(t, data.Sub1.V9, time.Unix(947206861, 0).UTC())
tst.AssertEqual(t, data.Sub2.V1, 846)
tst.AssertEqual(t, data.Sub2.VX, "9")
tst.AssertEqual(t, data.Sub2.V2, "222_hello_world")
tst.AssertEqual(t, data.Sub2.V8, time.Second*64)
tst.AssertEqual(t, data.Sub2.V9, time.Unix(1257894000, 0).UTC())
tst.AssertEqual(t, data.Sub3.V1, 33846)
tst.AssertEqual(t, data.Sub3.VX, "14")
tst.AssertEqual(t, data.Sub3.V2, "33_hello_world")
tst.AssertEqual(t, data.Sub3.V8, time.Second*1984)
tst.AssertEqual(t, data.Sub3.V9, time.Unix(2015276400, 0).UTC())
tst.AssertEqual(t, data.Sub4.V1, 11)
tst.AssertEqual(t, data.Sub4.VX, "19")
tst.AssertEqual(t, data.Sub4.V2, "22")
tst.AssertEqual(t, data.Sub4.V8, time.Second*1980)
tst.AssertEqual(t, data.Sub4.V9, time.Unix(2335219200, 0).UTC())
}
func TestApplyEnvOverridesPointer(t *testing.T) {
type aliasint int
type aliasstring string
type testdata struct {
V1 *int `env:"TEST_V1"`
VX *string ``
V2 *string `env:"TEST_V2"`
V3 *int8 `env:"TEST_V3"`
V4 *int32 `env:"TEST_V4"`
V5 *int64 `env:"TEST_V5"`
V6 *aliasint `env:"TEST_V6"`
VY *aliasint ``
V7 *aliasstring `env:"TEST_V7"`
V8 *time.Duration `env:"TEST_V8"`
V9 *time.Time `env:"TEST_V9"`
}
data := testdata{}
t.Setenv("TEST_V1", "846")
t.Setenv("TEST_V2", "hello_world")
t.Setenv("TEST_V3", "6")
t.Setenv("TEST_V4", "333")
t.Setenv("TEST_V5", "-937")
t.Setenv("TEST_V6", "070")
t.Setenv("TEST_V7", "AAAAAA")
t.Setenv("TEST_V8", "1min4s")
t.Setenv("TEST_V9", "2009-11-10T23:00:00Z")
err := ApplyEnvOverrides("", &data, ".")
if err != nil {
t.Errorf("%v", err)
t.FailNow()
}
tst.AssertDeRefEqual(t, data.V1, 846)
tst.AssertDeRefEqual(t, data.V2, "hello_world")
tst.AssertDeRefEqual(t, data.V3, 6)
tst.AssertDeRefEqual(t, data.V4, 333)
tst.AssertDeRefEqual(t, data.V5, -937)
tst.AssertDeRefEqual(t, data.V6, 70)
tst.AssertDeRefEqual(t, data.V7, "AAAAAA")
tst.AssertDeRefEqual(t, data.V8, time.Second*64)
tst.AssertDeRefEqual(t, data.V9, time.Unix(1257894000, 0).UTC())
}
func assertEqual[T comparable](t *testing.T, actual T, expected T) {
@@ -114,3 +271,12 @@ func assertEqual[T comparable](t *testing.T, actual T, expected T) {
t.Errorf("values differ: Actual: '%v', Expected: '%v'", actual, expected)
}
}
func assertPtrEqual[T comparable](t *testing.T, actual *T, expected T) {
if actual == nil {
t.Errorf("values differ: Actual: NIL, Expected: '%v'", expected)
return
}
if *actual != expected {
t.Errorf("values differ: Actual: '%v', Expected: '%v'", *actual, expected)
}
}

132
cryptext/aes.go Normal file
View File

@@ -0,0 +1,132 @@
package cryptext
import (
"bytes"
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"crypto/sha256"
"encoding/base32"
"encoding/json"
"errors"
"golang.org/x/crypto/scrypt"
"io"
)
// https://stackoverflow.com/a/18819040/1761622
type aesPayload struct {
Salt []byte `json:"s"`
IV []byte `json:"i"`
Data []byte `json:"d"`
Rounds int `json:"r"`
Version uint `json:"v"`
}
func EncryptAESSimple(password []byte, data []byte, rounds int) (string, error) {
salt := make([]byte, 8)
_, err := io.ReadFull(rand.Reader, salt)
if err != nil {
return "", err
}
key, err := scrypt.Key(password, salt, rounds, 8, 1, 32)
if err != nil {
return "", err
}
block, err := aes.NewCipher(key)
if err != nil {
return "", err
}
h := sha256.New()
h.Write(data)
checksum := h.Sum(nil)
if len(checksum) != 32 {
return "", errors.New("wrong cs size")
}
ciphertext := make([]byte, 32+len(data))
iv := make([]byte, aes.BlockSize)
_, err = io.ReadFull(rand.Reader, iv)
if err != nil {
return "", err
}
combinedData := make([]byte, 0, 32+len(data))
combinedData = append(combinedData, checksum...)
combinedData = append(combinedData, data...)
cfb := cipher.NewCFBEncrypter(block, iv)
cfb.XORKeyStream(ciphertext, combinedData)
pl := aesPayload{
Salt: salt,
IV: iv,
Data: ciphertext,
Version: 1,
Rounds: rounds,
}
jbin, err := json.Marshal(pl)
if err != nil {
return "", err
}
res := base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(jbin)
return res, nil
}
func DecryptAESSimple(password []byte, encText string) ([]byte, error) {
jbin, err := base32.StdEncoding.WithPadding(base32.NoPadding).DecodeString(encText)
if err != nil {
return nil, err
}
var pl aesPayload
err = json.Unmarshal(jbin, &pl)
if err != nil {
return nil, err
}
if pl.Version != 1 {
return nil, errors.New("unsupported version")
}
key, err := scrypt.Key(password, pl.Salt, pl.Rounds, 8, 1, 32) // this is not 100% correct, rounds too low and salt is missing
if err != nil {
return nil, err
}
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
dest := make([]byte, len(pl.Data))
cfb := cipher.NewCFBDecrypter(block, pl.IV)
cfb.XORKeyStream(dest, pl.Data)
if len(dest) < 32 {
return nil, errors.New("payload too small")
}
chck := dest[:32]
data := dest[32:]
h := sha256.New()
h.Write(data)
chck2 := h.Sum(nil)
if !bytes.Equal(chck, chck2) {
return nil, errors.New("checksum mismatch")
}
return data, nil
}
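
A round-trip sketch of the two helpers; password, plaintext and the scrypt round count (a power of two, as scrypt requires) are illustrative:

package main

import (
	"fmt"

	"git.blackforestbytes.com/BlackForestBytes/goext/cryptext"
)

func main() {
	pw := []byte("correct horse battery staple")
	plain := []byte("attack at dawn")

	// the returned base32 string embeds salt, IV, scrypt rounds and the
	// ciphertext (which itself is prefixed with a sha256 checksum of the data)
	enc, err := cryptext.EncryptAESSimple(pw, plain, 16384)
	if err != nil {
		panic(err)
	}

	dec, err := cryptext.DecryptAESSimple(pw, enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(dec)) // "attack at dawn"
}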

36
cryptext/aes_test.go Normal file
View File

@@ -0,0 +1,36 @@
package cryptext
import (
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/tst"
"testing"
)
func TestEncryptAESSimple(t *testing.T) {
pw := []byte("hunter12")
str1 := []byte("Hello World")
str2, err := EncryptAESSimple(pw, str1, 512)
if err != nil {
panic(err)
}
fmt.Printf("%s\n", str2)
str3, err := DecryptAESSimple(pw, str2)
if err != nil {
panic(err)
}
tst.AssertEqual(t, string(str1), string(str3))
str4, err := EncryptAESSimple(pw, str3, 512)
if err != nil {
panic(err)
}
tst.AssertNotEqual(t, string(str2), string(str4))
}

View File

@@ -1,25 +1,20 @@
package cryptext
import (
"git.blackforestbytes.com/BlackForestBytes/goext/tst"
"testing"
)
func TestStrSha256(t *testing.T) {
tst.AssertEqual(t, StrSha256(""), "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
tst.AssertEqual(t, StrSha256("0"), "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9")
tst.AssertEqual(t, StrSha256("80085"), "b3786e141d65638ad8a98173e26b5f6a53c927737b23ff31fb1843937250f44b")
tst.AssertEqual(t, StrSha256("Hello World"), "a591a6d40bf420404a011733cfb7b190d62c65bf0bcda32b57b277d9ad9f146e")
}
func TestBytesSha256(t *testing.T) {
tst.AssertEqual(t, BytesSha256([]byte{}), "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
tst.AssertEqual(t, BytesSha256([]byte{0}), "6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d")
tst.AssertEqual(t, BytesSha256([]byte{128}), "76be8b528d0075f7aae98d6fa57a6d3c83ae480a8469e668d7b0af968995ac71")
tst.AssertEqual(t, BytesSha256([]byte{0, 1, 2, 4, 8, 16, 32, 64, 128, 255}), "55016a318ba538e00123c736b2a8b6db368d00e7e25727547655b653e5853603")
}

428
cryptext/passHash.go Normal file
View File

@@ -0,0 +1,428 @@
package cryptext
import (
"crypto/rand"
"crypto/sha256"
"crypto/sha512"
"encoding/base64"
"encoding/hex"
"errors"
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/totpext"
"golang.org/x/crypto/bcrypt"
"strconv"
"strings"
)
const LatestPassHashVersion = 5
// PassHash
// - [v0]: plaintext password ( `0|...` ) // simple, used to write PW's directly in DB
// - [v1]: sha256(plaintext) // simple hashing
// - [v2]: seed | sha256<seed>(plaintext) // add seed
// - [v3]: seed | sha256<seed>(plaintext) | [hex(totp)] // add TOTP support
// - [v4]: bcrypt(plaintext) | [hex(totp)] // use proper bcrypt
// - [v5]: bcrypt(sha512(plaintext)) | [hex(totp)] // hash pw before bcrypt (otherwise max pw-len = 72)
type PassHash string
func (ph PassHash) Valid() bool {
_, _, _, _, _, valid := ph.Data()
return valid
}
func (ph PassHash) HasTOTP() bool {
_, _, _, otp, _, _ := ph.Data()
return otp
}
func (ph PassHash) Data() (_version int, _seed []byte, _payload []byte, _totp bool, _totpsecret []byte, _valid bool) {
split := strings.Split(string(ph), "|")
if len(split) == 0 {
return -1, nil, nil, false, nil, false
}
version, err := strconv.ParseInt(split[0], 10, 32)
if err != nil {
return -1, nil, nil, false, nil, false
}
if version == 0 {
if len(split) != 2 {
return -1, nil, nil, false, nil, false
}
return int(version), nil, []byte(split[1]), false, nil, true
}
if version == 1 {
if len(split) != 2 {
return -1, nil, nil, false, nil, false
}
payload, err := base64.RawStdEncoding.DecodeString(split[1])
if err != nil {
return -1, nil, nil, false, nil, false
}
return int(version), nil, payload, false, nil, true
}
if version == 2 {
if len(split) != 3 {
return -1, nil, nil, false, nil, false
}
seed, err := base64.RawStdEncoding.DecodeString(split[1])
if err != nil {
return -1, nil, nil, false, nil, false
}
payload, err := base64.RawStdEncoding.DecodeString(split[2])
if err != nil {
return -1, nil, nil, false, nil, false
}
return int(version), seed, payload, false, nil, true
}
if version == 3 {
if len(split) != 4 {
return -1, nil, nil, false, nil, false
}
seed, err := base64.RawStdEncoding.DecodeString(split[1])
if err != nil {
return -1, nil, nil, false, nil, false
}
payload, err := base64.RawStdEncoding.DecodeString(split[2])
if err != nil {
return -1, nil, nil, false, nil, false
}
totp := false
totpsecret := make([]byte, 0)
if split[3] != "0" {
totpsecret, err = hex.DecodeString(split[3])
totp = true
}
return int(version), seed, payload, totp, totpsecret, true
}
if version == 4 {
if len(split) != 3 {
return -1, nil, nil, false, nil, false
}
payload := []byte(split[1])
totp := false
totpsecret := make([]byte, 0)
if split[2] != "0" {
totpsecret, err = hex.DecodeString(split[2])
totp = true
}
return int(version), nil, payload, totp, totpsecret, true
}
if version == 5 {
if len(split) != 3 {
return -1, nil, nil, false, nil, false
}
payload := []byte(split[1])
totp := false
totpsecret := make([]byte, 0)
if split[2] != "0" {
totpsecret, err = hex.DecodeString(split[2])
totp = true
}
return int(version), nil, payload, totp, totpsecret, true
}
return -1, nil, nil, false, nil, false
}
func (ph PassHash) Verify(plainpass string, totp *string) bool {
version, seed, payload, hastotp, totpsecret, valid := ph.Data()
if !valid {
return false
}
if hastotp && totp == nil {
return false
}
if version == 0 {
return langext.ArrEqualsExact([]byte(plainpass), payload)
}
if version == 1 {
return langext.ArrEqualsExact(hash256(plainpass), payload)
}
if version == 2 {
return langext.ArrEqualsExact(hash256Seeded(plainpass, seed), payload)
}
if version == 3 {
if !hastotp {
return langext.ArrEqualsExact(hash256Seeded(plainpass, seed), payload)
} else {
return langext.ArrEqualsExact(hash256Seeded(plainpass, seed), payload) && totpext.Validate(totpsecret, *totp)
}
}
if version == 4 {
if !hastotp {
return bcrypt.CompareHashAndPassword(payload, []byte(plainpass)) == nil
} else {
return bcrypt.CompareHashAndPassword(payload, []byte(plainpass)) == nil && totpext.Validate(totpsecret, *totp)
}
}
if version == 5 {
if !hastotp {
return bcrypt.CompareHashAndPassword(payload, hash512(plainpass)) == nil
} else {
return bcrypt.CompareHashAndPassword(payload, hash512(plainpass)) == nil && totpext.Validate(totpsecret, *totp)
}
}
return false
}
func (ph PassHash) NeedsPasswordUpgrade() bool {
version, _, _, _, _, valid := ph.Data()
return valid && version < LatestPassHashVersion
}
func (ph PassHash) Upgrade(plainpass string) (PassHash, error) {
version, _, _, hastotp, totpsecret, valid := ph.Data()
if !valid {
return "", errors.New("invalid password")
}
if version == LatestPassHashVersion {
return ph, nil
}
if hastotp {
return HashPassword(plainpass, totpsecret)
} else {
return HashPassword(plainpass, nil)
}
}
func (ph PassHash) ClearTOTP() (PassHash, error) {
version, _, _, _, _, valid := ph.Data()
if !valid {
return "", errors.New("invalid PassHash")
}
if version == 0 {
return ph, nil
}
if version == 1 {
return ph, nil
}
if version == 2 {
return ph, nil
}
if version == 3 {
split := strings.Split(string(ph), "|")
split[3] = "0"
return PassHash(strings.Join(split, "|")), nil
}
if version == 4 {
split := strings.Split(string(ph), "|")
split[2] = "0"
return PassHash(strings.Join(split, "|")), nil
}
if version == 5 {
split := strings.Split(string(ph), "|")
split[2] = "0"
return PassHash(strings.Join(split, "|")), nil
}
return "", errors.New("unknown version")
}
func (ph PassHash) WithTOTP(totpSecret []byte) (PassHash, error) {
version, _, _, _, _, valid := ph.Data()
if !valid {
return "", errors.New("invalid PassHash")
}
if version == 0 {
return "", errors.New("version does not support totp, needs upgrade")
}
if version == 1 {
return "", errors.New("version does not support totp, needs upgrade")
}
if version == 2 {
return "", errors.New("version does not support totp, needs upgrade")
}
if version == 3 {
split := strings.Split(string(ph), "|")
split[3] = hex.EncodeToString(totpSecret)
return PassHash(strings.Join(split, "|")), nil
}
if version == 4 {
split := strings.Split(string(ph), "|")
split[2] = hex.EncodeToString(totpSecret)
return PassHash(strings.Join(split, "|")), nil
}
if version == 5 {
split := strings.Split(string(ph), "|")
split[2] = hex.EncodeToString(totpSecret)
return PassHash(strings.Join(split, "|")), nil
}
return "", errors.New("unknown version")
}
func (ph PassHash) Change(newPlainPass string) (PassHash, error) {
version, _, _, hastotp, totpsecret, valid := ph.Data()
if !valid {
return "", errors.New("invalid PassHash")
}
if version == 0 {
return HashPasswordV0(newPlainPass)
}
if version == 1 {
return HashPasswordV1(newPlainPass)
}
if version == 2 {
return HashPasswordV2(newPlainPass)
}
if version == 3 {
return HashPasswordV3(newPlainPass, langext.Conditional(hastotp, totpsecret, nil))
}
if version == 4 {
return HashPasswordV4(newPlainPass, langext.Conditional(hastotp, totpsecret, nil))
}
if version == 5 {
return HashPasswordV5(newPlainPass, langext.Conditional(hastotp, totpsecret, nil))
}
return "", errors.New("unknown version")
}
func (ph PassHash) String() string {
return string(ph)
}
func HashPassword(plainpass string, totpSecret []byte) (PassHash, error) {
return HashPasswordV5(plainpass, totpSecret)
}
func HashPasswordV5(plainpass string, totpSecret []byte) (PassHash, error) {
var strtotp string
if totpSecret == nil {
strtotp = "0"
} else {
strtotp = hex.EncodeToString(totpSecret)
}
payload, err := bcrypt.GenerateFromPassword(hash512(plainpass), bcrypt.MinCost)
if err != nil {
return "", err
}
return PassHash(fmt.Sprintf("5|%s|%s", string(payload), strtotp)), nil
}
func HashPasswordV4(plainpass string, totpSecret []byte) (PassHash, error) {
var strtotp string
if totpSecret == nil {
strtotp = "0"
} else {
strtotp = hex.EncodeToString(totpSecret)
}
payload, err := bcrypt.GenerateFromPassword([]byte(plainpass), bcrypt.MinCost)
if err != nil {
return "", err
}
return PassHash(fmt.Sprintf("4|%s|%s", string(payload), strtotp)), nil
}
func HashPasswordV3(plainpass string, totpSecret []byte) (PassHash, error) {
var strtotp string
if totpSecret == nil {
strtotp = "0"
} else {
strtotp = hex.EncodeToString(totpSecret)
}
seed, err := newSeed()
if err != nil {
return "", err
}
checksum := hash256Seeded(plainpass, seed)
return PassHash(fmt.Sprintf("3|%s|%s|%s",
base64.RawStdEncoding.EncodeToString(seed),
base64.RawStdEncoding.EncodeToString(checksum),
strtotp)), nil
}
func HashPasswordV2(plainpass string) (PassHash, error) {
seed, err := newSeed()
if err != nil {
return "", err
}
checksum := hash256Seeded(plainpass, seed)
return PassHash(fmt.Sprintf("2|%s|%s", base64.RawStdEncoding.EncodeToString(seed), base64.RawStdEncoding.EncodeToString(checksum))), nil
}
func HashPasswordV1(plainpass string) (PassHash, error) {
return PassHash(fmt.Sprintf("1|%s", base64.RawStdEncoding.EncodeToString(hash256(plainpass)))), nil
}
func HashPasswordV0(plainpass string) (PassHash, error) {
return PassHash(fmt.Sprintf("0|%s", plainpass)), nil
}
func hash512(s string) []byte {
h := sha512.New()
h.Write([]byte(s))
bs := h.Sum(nil)
return bs
}
func hash256(s string) []byte {
h := sha256.New()
h.Write([]byte(s))
bs := h.Sum(nil)
return bs
}
func hash256Seeded(s string, seed []byte) []byte {
h := sha256.New()
h.Write(seed)
h.Write([]byte(s))
bs := h.Sum(nil)
return bs
}
func newSeed() ([]byte, error) {
secret := make([]byte, 32)
_, err := rand.Read(secret)
if err != nil {
return nil, err
}
return secret, nil
}
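
A usage sketch of the PassHash API (hash, verify, and the login-time upgrade path); the password literals are placeholders:

package main

import (
	"fmt"

	"git.blackforestbytes.com/BlackForestBytes/goext/cryptext"
)

func main() {
	// hash with the latest scheme (v5: bcrypt over sha512, here without TOTP)
	ph, err := cryptext.HashPassword("hunter12", nil)
	if err != nil {
		panic(err)
	}

	fmt.Println(ph.Verify("hunter12", nil)) // true
	fmt.Println(ph.Verify("hunter13", nil)) // false

	// hashes stored with an older version can be migrated after a successful login
	if ph.NeedsPasswordUpgrade() {
		if ph, err = ph.Upgrade("hunter12"); err != nil {
			panic(err)
		}
	}
	fmt.Println(ph.String())
}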

210
cryptext/passHash_test.go Normal file
View File

@@ -0,0 +1,210 @@
package cryptext
import (
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/totpext"
"git.blackforestbytes.com/BlackForestBytes/goext/tst"
"testing"
)
func TestPassHash1(t *testing.T) {
ph, err := HashPassword("test123", nil)
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertFalse(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
}
func TestPassHashTOTP(t *testing.T) {
sec, err := totpext.GenerateSecret()
tst.AssertNoErr(t, err)
ph, err := HashPassword("test123", sec)
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertTrue(t, ph.HasTOTP())
tst.AssertFalse(t, ph.NeedsPasswordUpgrade())
tst.AssertFalse(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
tst.AssertTrue(t, ph.Verify("test123", langext.Ptr(totpext.TOTP(sec))))
tst.AssertFalse(t, ph.Verify("test124", nil))
}
func TestPassHashUpgrade_V0(t *testing.T) {
ph, err := HashPasswordV0("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertTrue(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
ph, err = ph.Upgrade("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertFalse(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
}
func TestPassHashUpgrade_V1(t *testing.T) {
ph, err := HashPasswordV1("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertTrue(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
ph, err = ph.Upgrade("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertFalse(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
}
func TestPassHashUpgrade_V2(t *testing.T) {
ph, err := HashPasswordV2("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertTrue(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
ph, err = ph.Upgrade("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertFalse(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
}
func TestPassHashUpgrade_V3(t *testing.T) {
ph, err := HashPasswordV3("test123", nil)
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertTrue(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
ph, err = ph.Upgrade("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertFalse(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
}
func TestPassHashUpgrade_V3_TOTP(t *testing.T) {
sec, err := totpext.GenerateSecret()
tst.AssertNoErr(t, err)
ph, err := HashPasswordV3("test123", sec)
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertTrue(t, ph.HasTOTP())
tst.AssertTrue(t, ph.NeedsPasswordUpgrade())
tst.AssertFalse(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
tst.AssertTrue(t, ph.Verify("test123", langext.Ptr(totpext.TOTP(sec))))
tst.AssertFalse(t, ph.Verify("test124", nil))
ph, err = ph.Upgrade("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertTrue(t, ph.HasTOTP())
tst.AssertFalse(t, ph.NeedsPasswordUpgrade())
tst.AssertFalse(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
tst.AssertTrue(t, ph.Verify("test123", langext.Ptr(totpext.TOTP(sec))))
tst.AssertFalse(t, ph.Verify("test124", nil))
}
func TestPassHashUpgrade_V4(t *testing.T) {
ph, err := HashPasswordV4("test123", nil)
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertTrue(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
ph, err = ph.Upgrade("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertFalse(t, ph.HasTOTP())
tst.AssertFalse(t, ph.NeedsPasswordUpgrade())
tst.AssertTrue(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
}
func TestPassHashUpgrade_V4_TOTP(t *testing.T) {
sec, err := totpext.GenerateSecret()
tst.AssertNoErr(t, err)
ph, err := HashPasswordV4("test123", sec)
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertTrue(t, ph.HasTOTP())
tst.AssertTrue(t, ph.NeedsPasswordUpgrade())
tst.AssertFalse(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
tst.AssertTrue(t, ph.Verify("test123", langext.Ptr(totpext.TOTP(sec))))
tst.AssertFalse(t, ph.Verify("test124", nil))
ph, err = ph.Upgrade("test123")
tst.AssertNoErr(t, err)
tst.AssertTrue(t, ph.Valid())
tst.AssertTrue(t, ph.HasTOTP())
tst.AssertFalse(t, ph.NeedsPasswordUpgrade())
tst.AssertFalse(t, ph.Verify("test123", nil))
tst.AssertFalse(t, ph.Verify("test124", nil))
tst.AssertTrue(t, ph.Verify("test123", langext.Ptr(totpext.TOTP(sec))))
tst.AssertFalse(t, ph.Verify("test124", nil))
}

View File

@@ -0,0 +1,263 @@
package cryptext
import (
"crypto/rand"
"io"
"math/big"
mathrand "math/rand"
"strings"
)
const (
ppStartChar = "BCDFGHJKLMNPQRSTVWXZ"
ppEndChar = "ABDEFIKMNORSTUXYZ"
ppVowel = "AEIOUY"
ppConsonant = "BCDFGHJKLMNPQRSTVWXZ"
ppSegmentLenMin = 3
ppSegmentLenMax = 7
ppMaxRepeatedVowel = 2
ppMaxRepeatedConsonant = 2
)
var ppContinuation = map[uint8]string{
'A': "BCDFGHJKLMNPRSTVWXYZ",
'B': "ADFIKLMNORSTUY",
'C': "AEIKOUY",
'D': "AEILORSUYZ",
'E': "BCDFGHJKLMNPRSTVWXYZ",
'F': "ADEGIKLOPRTUY",
'G': "ABDEFHILMNORSTUY",
'H': "AEIOUY",
'I': "BCDFGHJKLMNPRSTVWXZ",
'J': "AEIOUY",
'K': "ADEFHILMNORSTUY",
'L': "ADEFGIJKMNOPSTUVWYZ",
'M': "ABEFIKOPSTUY",
'N': "ABEFIKOPSTUY",
'O': "BCDFGHJKLMNPRSTVWXYZ",
'P': "AEFIJLORSTUY",
'Q': "AEIOUY",
'R': "ADEFGHIJKLMNOPSTUVYZ",
'S': "ACDEIKLOPTUYZ",
'T': "AEHIJOPRSUWY",
'U': "BCDFGHJKLMNPRSTVWXZ",
'V': "AEIOUY",
'W': "AEIOUY",
'X': "AEIOUY",
'Y': "ABCDFGHKLMNPRSTVXZ",
'Z': "AEILOTUY",
}
var ppLog2Map = map[int]float64{
1: 0.00000000,
2: 1.00000000,
3: 1.58496250,
4: 2.00000000,
5: 2.32192809,
6: 2.58496250,
7: 2.80735492,
8: 3.00000000,
9: 3.16992500,
10: 3.32192809,
11: 3.45943162,
12: 3.58496250,
13: 3.70043972,
14: 3.80735492,
15: 3.90689060,
16: 4.00000000,
17: 4.08746284,
18: 4.16992500,
19: 4.24792751,
20: 4.32192809,
21: 4.39231742,
22: 4.45943162,
23: 4.52356196,
24: 4.58496250,
25: 4.64385619,
26: 4.70043972,
27: 4.75488750,
28: 4.80735492,
29: 4.85798100,
30: 4.90689060,
31: 4.95419631,
32: 5.00000000,
}
var (
ppVowelMap = ppMakeSet(ppVowel)
ppConsonantMap = ppMakeSet(ppConsonant)
ppEndCharMap = ppMakeSet(ppEndChar)
)
func ppMakeSet(v string) map[uint8]bool {
mp := make(map[uint8]bool, len(v))
for _, chr := range v {
mp[uint8(chr)] = true
}
return mp
}
func ppRandInt(rng io.Reader, max int) int {
v, err := rand.Int(rng, big.NewInt(int64(max)))
if err != nil {
panic(err)
}
return int(v.Int64())
}
func ppRand(rng io.Reader, chars string, entropy *float64) uint8 {
chr := chars[ppRandInt(rng, len(chars))]
*entropy = *entropy + ppLog2Map[len(chars)]
return chr
}
func ppCharType(chr uint8) (bool, bool) {
_, ok1 := ppVowelMap[chr]
_, ok2 := ppConsonantMap[chr]
return ok1, ok2
}
func ppCharsetRemove(cs string, set map[uint8]bool, allowEmpty bool) string {
result := ""
for _, chr := range cs {
if _, ok := set[uint8(chr)]; !ok {
result += string(chr)
}
}
if result == "" && !allowEmpty {
return cs
}
return result
}
func ppCharsetFilter(cs string, set map[uint8]bool, allowEmpty bool) string {
result := ""
for _, chr := range cs {
if _, ok := set[uint8(chr)]; ok {
result += string(chr)
}
}
if result == "" && !allowEmpty {
return cs
}
return result
}
func PronouncablePasswordExt(rng io.Reader, pwlen int) (string, float64) {
// kinda pseudo markov-chain - with a few extra rules and no weights...
if pwlen <= 0 {
return "", 0
}
vowelCount := 0
consoCount := 0
entropy := float64(0)
startChar := ppRand(rng, ppStartChar, &entropy)
result := string(startChar)
currentChar := startChar
isVowel, isConsonant := ppCharType(currentChar)
if isVowel {
vowelCount = 1
}
if isConsonant {
consoCount = ppMaxRepeatedConsonant
}
segmentLen := 1
segmentLenTarget := ppSegmentLenMin + ppRandInt(rng, ppSegmentLenMax-ppSegmentLenMin)
for len(result) < pwlen {
charset := ppContinuation[currentChar]
if vowelCount >= ppMaxRepeatedVowel {
charset = ppCharsetRemove(charset, ppVowelMap, false)
}
if consoCount >= ppMaxRepeatedConsonant {
charset = ppCharsetRemove(charset, ppConsonantMap, false)
}
lastOfSegment := false
newSegment := false
if len(result)+1 == pwlen {
// last of result
charset = ppCharsetFilter(charset, ppEndCharMap, false)
} else if segmentLen+1 == segmentLenTarget {
// last of segment
charsetNew := ppCharsetFilter(charset, ppEndCharMap, true)
if charsetNew != "" {
charset = charsetNew
lastOfSegment = true
}
} else if segmentLen >= segmentLenTarget {
// (perhaps) start of new segment
if _, ok := ppEndCharMap[currentChar]; ok {
charset = ppStartChar
newSegment = true
} else {
// continue segment for one more char to (hopefully) find an end-char
charsetNew := ppCharsetFilter(charset, ppEndCharMap, true)
if charsetNew != "" {
charset = charsetNew
lastOfSegment = true
}
}
} else {
// normal continuation
}
newChar := ppRand(rng, charset, &entropy)
if lastOfSegment {
currentChar = newChar
segmentLen++
result += strings.ToLower(string(newChar))
} else if newSegment {
currentChar = newChar
segmentLen = 1
result += strings.ToUpper(string(newChar))
segmentLenTarget = ppSegmentLenMin + ppRandInt(rng, ppSegmentLenMax-ppSegmentLenMin)
vowelCount = 0
consoCount = 0
} else {
currentChar = newChar
segmentLen++
result += strings.ToLower(string(newChar))
}
isVowel, isConsonant := ppCharType(currentChar)
if isVowel {
vowelCount++
consoCount = 0
}
if isConsonant {
vowelCount = 0
if newSegment {
consoCount = ppMaxRepeatedConsonant
} else {
consoCount++
}
}
}
return result, entropy
}
func PronouncablePassword(len int) string {
v, _ := PronouncablePasswordExt(rand.Reader, len)
return v
}
func PronouncablePasswordSeeded(seed int64, len int) string {
v, _ := PronouncablePasswordExt(mathrand.New(mathrand.NewSource(seed)), len)
return v
}
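
A minimal usage sketch of the three entry points above; the goext import path is assumed from the module name used elsewhere in this diff, and the printed values are illustrative only:

package main

import (
    "crypto/rand"
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/cryptext"
)

func main() {
    // convenience wrapper: crypto/rand source, entropy estimate discarded
    pw := cryptext.PronouncablePassword(12)

    // full variant: caller chooses the randomness source and receives the estimated entropy (in bits)
    pwExt, entropy := cryptext.PronouncablePasswordExt(rand.Reader, 16)

    // deterministic variant (math/rand with a fixed seed) - reproducible, so only for tests, never for real secrets
    seeded := cryptext.PronouncablePasswordSeeded(1234, 10)

    fmt.Printf("%s %s (%.2f bits) %s\n", pw, pwExt, entropy, seeded)
}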


@@ -0,0 +1,35 @@
package cryptext
import (
"fmt"
"math/rand"
"testing"
)
func TestPronouncablePasswordExt(t *testing.T) {
for i := 0; i < 20; i++ {
pw, entropy := PronouncablePasswordExt(rand.New(rand.NewSource(int64(i))), 16)
fmt.Printf("[%.2f] => %s\n", entropy, pw)
}
}
func TestPronouncablePasswordSeeded(t *testing.T) {
for i := 0; i < 20; i++ {
pw := PronouncablePasswordSeeded(int64(i), 8)
fmt.Printf("%s\n", pw)
}
}
func TestPronouncablePassword(t *testing.T) {
for i := 0; i < 20; i++ {
pw := PronouncablePassword(i + 1)
fmt.Printf("%s\n", pw)
}
}
func TestPronouncablePasswordWrongLen(t *testing.T) {
PronouncablePassword(0)
PronouncablePassword(-1)
PronouncablePassword(-2)
PronouncablePassword(-3)
}

27
ctxext/getter.go Normal file

@@ -0,0 +1,27 @@
package ctxext
import "context"
func Value[T any](ctx context.Context, key any) (T, bool) {
v := ctx.Value(key)
if v == nil {
return *new(T), false
}
if tv, ok := v.(T); !ok {
return *new(T), false
} else {
return tv, true
}
}
func ValueOrDefault[T any](ctx context.Context, key any, def T) T {
v := ctx.Value(key)
if v == nil {
return def
}
if tv, ok := v.(T); !ok {
return def
} else {
return tv
}
}
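
A small usage sketch for the two helpers above; the key type and the values are made up for illustration:

package main

import (
    "context"
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/ctxext"
)

// unexported key type avoids collisions with other packages' context keys
type ctxKey string

func main() {
    ctx := context.WithValue(context.Background(), ctxKey("requestID"), "req-1234")

    // typed lookup: ok is false if the key is missing or holds a different type
    if id, ok := ctxext.Value[string](ctx, ctxKey("requestID")); ok {
        fmt.Println("request id:", id)
    }

    // fallback variant: returns the default if the key is absent or of the wrong type
    timeout := ctxext.ValueOrDefault[int](ctx, ctxKey("timeoutSec"), 30)
    fmt.Println("timeout:", timeout)
}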

18
cursortoken/direction.go Normal file

@@ -0,0 +1,18 @@
package cursortoken
type SortDirection string //@enum:type
const (
SortASC SortDirection = "ASC"
SortDESC SortDirection = "DESC"
)
func (sd SortDirection) ToMongo() int {
if sd == SortASC {
return 1
} else if sd == SortDESC {
return -1
} else {
return 0
}
}

15
cursortoken/filter.go Normal file

@@ -0,0 +1,15 @@
package cursortoken
import (
"context"
"go.mongodb.org/mongo-driver/mongo"
)
type RawFilter interface {
FilterQuery(ctx context.Context) mongo.Pipeline
}
type Filter interface {
FilterQuery(ctx context.Context) mongo.Pipeline
Pagination(ctx context.Context) (string, SortDirection, string, SortDirection)
}

95
cursortoken/token.go Normal file

@@ -0,0 +1,95 @@
package cursortoken
import (
"encoding/base32"
"encoding/json"
"git.blackforestbytes.com/BlackForestBytes/goext/exerr"
"strconv"
"strings"
"time"
)
type CursorToken interface {
Token() string
IsStart() bool
IsEnd() bool
}
type Mode string
const (
CTMStart Mode = "START"
CTMNormal Mode = "NORMAL"
CTMEnd Mode = "END"
)
type Extra struct {
Timestamp *time.Time
Id *string
Page *int
PageSize *int
}
func Decode(tok string) (CursorToken, error) {
if tok == "" {
return Start(), nil
}
if strings.ToLower(tok) == "@start" {
return Start(), nil
}
if strings.ToLower(tok) == "@end" {
return End(), nil
}
if strings.ToLower(tok) == "$end" {
return PageEnd(), nil
}
if strings.HasPrefix(tok, "$") && len(tok) > 1 {
n, err := strconv.ParseInt(tok[1:], 10, 64)
if err != nil {
return nil, exerr.Wrap(err, "failed to deserialize token").Str("token", tok).WithType(exerr.TypeCursorTokenDecode).Build()
}
return Page(int(n)), nil
}
if strings.HasPrefix(tok, "tok_") {
body, err := base32.StdEncoding.DecodeString(tok[len("tok_"):])
if err != nil {
return nil, err
}
var tokenDeserialize cursorTokenKeySortSerialize
err = json.Unmarshal(body, &tokenDeserialize)
if err != nil {
return nil, exerr.Wrap(err, "failed to deserialize token").Str("token", tok).WithType(exerr.TypeCursorTokenDecode).Build()
}
token := CTKeySort{Mode: CTMNormal}
if tokenDeserialize.ValuePrimary != nil {
token.ValuePrimary = *tokenDeserialize.ValuePrimary
}
if tokenDeserialize.ValueSecondary != nil {
token.ValueSecondary = *tokenDeserialize.ValueSecondary
}
if tokenDeserialize.Direction != nil {
token.Direction = *tokenDeserialize.Direction
}
if tokenDeserialize.DirectionSecondary != nil {
token.DirectionSecondary = *tokenDeserialize.DirectionSecondary
}
if tokenDeserialize.PageSize != nil {
token.PageSize = *tokenDeserialize.PageSize
}
token.Extra.Timestamp = tokenDeserialize.ExtraTimestamp
token.Extra.Id = tokenDeserialize.ExtraId
token.Extra.Page = tokenDeserialize.ExtraPage
token.Extra.PageSize = tokenDeserialize.ExtraPageSize
return token, nil
} else {
return nil, exerr.New(exerr.TypeCursorTokenDecode, "could not decode token, missing/unknown prefix").Str("token", tok).Build()
}
}
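
A sketch of the accepted token grammar, exercised through Decode; the loop values are illustrative:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/cursortoken"
)

func main() {
    // "", "@start" and "@end" decode to the key-sort start/end tokens,
    // "$end" and "$<page>" decode to page-based tokens,
    // everything else must carry the "tok_" prefix (base32-encoded JSON, see tokenKeySort.go).
    for _, raw := range []string{"", "@start", "@end", "$end", "$4"} {
        tok, err := cursortoken.Decode(raw)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%q -> start=%v end=%v re-encoded=%q\n", raw, tok.IsStart(), tok.IsEnd(), tok.Token())
    }
}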

136
cursortoken/tokenKeySort.go Normal file

@@ -0,0 +1,136 @@
package cursortoken
import (
"encoding/base32"
"encoding/json"
"go.mongodb.org/mongo-driver/bson/primitive"
"time"
)
type CTKeySort struct {
Mode Mode
ValuePrimary string
ValueSecondary string
Direction SortDirection
DirectionSecondary SortDirection
PageSize int
Extra Extra
}
type cursorTokenKeySortSerialize struct {
ValuePrimary *string `json:"v1,omitempty"`
ValueSecondary *string `json:"v2,omitempty"`
Direction *SortDirection `json:"dir,omitempty"`
DirectionSecondary *SortDirection `json:"dir2,omitempty"`
PageSize *int `json:"size,omitempty"`
ExtraTimestamp *time.Time `json:"ts,omitempty"`
ExtraId *string `json:"id,omitempty"`
ExtraPage *int `json:"pg,omitempty"`
ExtraPageSize *int `json:"sz,omitempty"`
}
func NewKeySortToken(valuePrimary string, valueSecondary string, direction SortDirection, directionSecondary SortDirection, pageSize int, extra Extra) CursorToken {
return CTKeySort{
Mode: CTMNormal,
ValuePrimary: valuePrimary,
ValueSecondary: valueSecondary,
Direction: direction,
DirectionSecondary: directionSecondary,
PageSize: pageSize,
Extra: extra,
}
}
func Start() CursorToken {
return CTKeySort{
Mode: CTMStart,
ValuePrimary: "",
ValueSecondary: "",
Direction: "",
DirectionSecondary: "",
PageSize: 0,
Extra: Extra{},
}
}
func End() CursorToken {
return CTKeySort{
Mode: CTMEnd,
ValuePrimary: "",
ValueSecondary: "",
Direction: "",
DirectionSecondary: "",
PageSize: 0,
Extra: Extra{},
}
}
func (c CTKeySort) Token() string {
if c.Mode == CTMStart {
return "@start"
}
if c.Mode == CTMEnd {
return "@end"
}
// We kinda manually implement omitempty for the CursorToken here
// because omitempty does not work for time.Time and otherwise we would always
// get weird time values when decoding a token that initially didn't have a Timestamp set
// For this use case we treat Unix=0 as an empty timestamp
sertok := cursorTokenKeySortSerialize{}
if c.ValuePrimary != "" {
sertok.ValuePrimary = &c.ValuePrimary
}
if c.ValueSecondary != "" {
sertok.ValueSecondary = &c.ValueSecondary
}
if c.Direction != "" {
sertok.Direction = &c.Direction
}
if c.DirectionSecondary != "" {
sertok.DirectionSecondary = &c.DirectionSecondary
}
if c.PageSize != 0 {
sertok.PageSize = &c.PageSize
}
sertok.ExtraTimestamp = c.Extra.Timestamp
sertok.ExtraId = c.Extra.Id
sertok.ExtraPage = c.Extra.Page
sertok.ExtraPageSize = c.Extra.PageSize
body, err := json.Marshal(sertok)
if err != nil {
panic(err)
}
return "tok_" + base32.StdEncoding.EncodeToString(body)
}
func (c CTKeySort) IsEnd() bool {
return c.Mode == CTMEnd
}
func (c CTKeySort) IsStart() bool {
return c.Mode == CTMStart
}
func (c CTKeySort) valuePrimaryObjectId() (primitive.ObjectID, bool) {
if oid, err := primitive.ObjectIDFromHex(c.ValuePrimary); err == nil {
return oid, true
} else {
return primitive.ObjectID{}, false
}
}
func (c CTKeySort) valueSecondaryObjectId() (primitive.ObjectID, bool) {
if oid, err := primitive.ObjectIDFromHex(c.ValueSecondary); err == nil {
return oid, true
} else {
return primitive.ObjectID{}, false
}
}
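
A round-trip sketch for the key-sort token; the primary/secondary values below are placeholders, in practice they come from the last document of the previous page:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/cursortoken"
)

func main() {
    tok := cursortoken.NewKeySortToken(
        "2024-05-01T12:00:00Z",     // value of the primary sort field of the last returned entry (placeholder)
        "661f0000000000000000abcd", // value of the secondary sort field, e.g. an ObjectID (placeholder)
        cursortoken.SortASC,
        cursortoken.SortASC,
        50,
        cursortoken.Extra{},
    )

    s := tok.Token() // "tok_" + base32(JSON), safe to hand out to API clients
    fmt.Println(s)

    back, err := cursortoken.Decode(s)
    if err != nil {
        panic(err)
    }
    fmt.Println(back.IsStart(), back.IsEnd()) // false false
}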


@@ -0,0 +1,41 @@
package cursortoken
import "strconv"
type CTPaginated struct {
Mode Mode
Page int
}
func Page(p int) CursorToken {
return CTPaginated{
Mode: CTMNormal,
Page: p,
}
}
func PageEnd() CursorToken {
return CTPaginated{
Mode: CTMEnd,
Page: 0,
}
}
func (c CTPaginated) Token() string {
if c.Mode == CTMStart {
return "$1"
}
if c.Mode == CTMEnd {
return "$end"
}
return "$" + strconv.Itoa(c.Page)
}
func (c CTPaginated) IsEnd() bool {
return c.Mode == CTMEnd
}
func (c CTPaginated) IsStart() bool {
return c.Mode == CTMStart || c.Page == 1
}


@@ -115,6 +115,9 @@ func (b *bufferedReadCloser) BufferedAll() ([]byte, error) {
 			return nil, err
 		}
 	}
+	if err := b.Reset(); err != nil {
+		return nil, err
+	}
 	return b.buffer, nil
 case modeSourceFinished:
@@ -131,10 +134,22 @@ func (b *bufferedReadCloser) BufferedAll() ([]byte, error) {
 	}
 }
+// Reset resets the buffer to the beginning of the buffer.
+// If the original source is partially read, we will finish reading it and fill our buffer
 func (b *bufferedReadCloser) Reset() error {
 	switch b.mode {
 	case modeSourceReading:
-		fallthrough
+		if b.off == 0 {
+			return nil // nobody has read anything yet
+		}
+		err := b.Close()
+		if err != nil {
+			return err
+		}
+		b.mode = modeBufferReading
+		b.off = 0
+		return nil
 	case modeSourceFinished:
 		err := b.Close()
 		if err != nil {

254
dataext/casMutex.go Normal file

@@ -0,0 +1,254 @@
package dataext
import (
"context"
"golang.org/x/sync/semaphore"
"runtime"
"sync"
"sync/atomic"
"time"
"unsafe"
)
// from https://github.com/viney-shih/go-lock/blob/2f19fd8ce335e33e0ab9dccb1ff2ce820c3da332/cas.go
// CASMutex is the struct implementing RWMutex with CAS mechanism.
type CASMutex struct {
state casState
turnstile *semaphore.Weighted
broadcastChan chan struct{}
broadcastMut sync.RWMutex
}
func NewCASMutex() *CASMutex {
return &CASMutex{
state: casStateNoLock,
turnstile: semaphore.NewWeighted(1),
broadcastChan: make(chan struct{}),
}
}
type casState int32
const (
casStateUndefined casState = iota - 2 // -2
casStateWriteLock // -1
casStateNoLock // 0
casStateReadLock // >= 1
)
func (m *CASMutex) getState(n int32) casState {
switch st := casState(n); {
case st == casStateWriteLock:
fallthrough
case st == casStateNoLock:
return st
case st >= casStateReadLock:
return casStateReadLock
default:
// actually, this should never happen.
return casStateUndefined
}
}
func (m *CASMutex) listen() <-chan struct{} {
m.broadcastMut.RLock()
defer m.broadcastMut.RUnlock()
return m.broadcastChan
}
func (m *CASMutex) broadcast() {
newCh := make(chan struct{})
m.broadcastMut.Lock()
ch := m.broadcastChan
m.broadcastChan = newCh
m.broadcastMut.Unlock()
close(ch)
}
func (m *CASMutex) tryLock(ctx context.Context) bool {
for {
broker := m.listen()
if atomic.CompareAndSwapInt32(
(*int32)(unsafe.Pointer(&m.state)),
int32(casStateNoLock),
int32(casStateWriteLock),
) {
return true
}
if ctx == nil {
return false
}
select {
case <-ctx.Done():
// timeout or cancellation
return false
case <-broker:
// waiting for signal triggered by m.broadcast() and trying again.
}
}
}
// TryLockWithContext attempts to acquire the lock, blocking until resources
// are available or ctx is done (timeout or cancellation).
func (m *CASMutex) TryLockWithContext(ctx context.Context) bool {
if err := m.turnstile.Acquire(ctx, 1); err != nil {
// Acquire failed due to timeout or cancellation
return false
}
defer m.turnstile.Release(1)
return m.tryLock(ctx)
}
// Lock acquires the lock.
// If it is currently held by others, Lock will wait until it has a chance to acquire it.
func (m *CASMutex) Lock() {
ctx := context.Background()
m.TryLockWithContext(ctx)
}
// TryLock attempts to acquire the lock without blocking.
// Return false if someone is holding it now.
func (m *CASMutex) TryLock() bool {
if !m.turnstile.TryAcquire(1) {
return false
}
defer m.turnstile.Release(1)
return m.tryLock(nil)
}
// TryLockWithTimeout attempts to acquire the lock within a period of time.
// Return false if spending time is more than duration and no chance to acquire it.
func (m *CASMutex) TryLockWithTimeout(duration time.Duration) bool {
ctx, cancel := context.WithTimeout(context.Background(), duration)
defer cancel()
return m.TryLockWithContext(ctx)
}
// Unlock releases the lock.
func (m *CASMutex) Unlock() {
if ok := atomic.CompareAndSwapInt32(
(*int32)(unsafe.Pointer(&m.state)),
int32(casStateWriteLock),
int32(casStateNoLock),
); !ok {
panic("Unlock failed")
}
m.broadcast()
}
func (m *CASMutex) rTryLock(ctx context.Context) bool {
for {
broker := m.listen()
n := atomic.LoadInt32((*int32)(unsafe.Pointer(&m.state)))
st := m.getState(n)
switch st {
case casStateNoLock, casStateReadLock:
if atomic.CompareAndSwapInt32((*int32)(unsafe.Pointer(&m.state)), n, n+1) {
return true
}
}
if ctx == nil {
return false
}
select {
case <-ctx.Done():
// timeout or cancellation
return false
default:
switch st {
// read-lock failed due to concurrence issue, try again immediately
case casStateNoLock, casStateReadLock:
runtime.Gosched() // allow other goroutines to do stuff.
continue
}
}
select {
case <-ctx.Done():
// timeout or cancellation
return false
case <-broker:
// waiting for signal triggered by m.broadcast() and trying again.
}
}
}
// RTryLockWithContext attempts to acquire the read lock, blocking until resources
// are available or ctx is done (timeout or cancellation).
func (m *CASMutex) RTryLockWithContext(ctx context.Context) bool {
if err := m.turnstile.Acquire(ctx, 1); err != nil {
// Acquire failed due to timeout or cancellation
return false
}
m.turnstile.Release(1)
return m.rTryLock(ctx)
}
// RLock acquires the read lock.
// If it is currently held by others writing, RLock will wait until it has a chance to acquire it.
func (m *CASMutex) RLock() {
ctx := context.Background()
m.RTryLockWithContext(ctx)
}
// RTryLock attempts to acquire the read lock without blocking.
// Return false if someone is writing it now.
func (m *CASMutex) RTryLock() bool {
if !m.turnstile.TryAcquire(1) {
return false
}
m.turnstile.Release(1)
return m.rTryLock(nil)
}
// RTryLockWithTimeout attempts to acquire the read lock within a period of time.
// Return false if spending time is more than duration and no chance to acquire it.
func (m *CASMutex) RTryLockWithTimeout(duration time.Duration) bool {
ctx, cancel := context.WithTimeout(context.Background(), duration)
defer cancel()
return m.RTryLockWithContext(ctx)
}
// RUnlock releases the read lock.
func (m *CASMutex) RUnlock() {
n := atomic.AddInt32((*int32)(unsafe.Pointer(&m.state)), -1)
switch m.getState(n) {
case casStateUndefined, casStateWriteLock:
panic("RUnlock failed")
case casStateNoLock:
m.broadcast()
}
}
// RLocker returns a Locker interface that implements the Lock and Unlock methods
// by calling CASMutex.RLock and CASMutex.RUnlock.
func (m *CASMutex) RLocker() sync.Locker {
return (*rlocker)(m)
}
type rlocker CASMutex
func (r *rlocker) Lock() { (*CASMutex)(r).RLock() }
func (r *rlocker) Unlock() { (*CASMutex)(r).RUnlock() }
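
A usage sketch for the mutex above, showing the timeout and read-lock variants:

package main

import (
    "fmt"
    "time"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    mu := dataext.NewCASMutex()

    // exclusive lock with a deadline instead of blocking forever
    if mu.TryLockWithTimeout(100 * time.Millisecond) {
        fmt.Println("write lock acquired")
        mu.Unlock()
    }

    // shared (read) lock; multiple readers may hold it concurrently
    mu.RLock()
    if mu.RTryLock() { // a second reader succeeds while the first is still held
        mu.RUnlock()
    }
    mu.RUnlock()
}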


@@ -0,0 +1,132 @@
package dataext
import (
"context"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/syncext"
"sync"
"time"
)
type DelayedCombiningInvoker struct {
syncLock sync.Mutex
triggerChan chan bool
cancelChan chan bool
execNowChan chan bool
action func()
delay time.Duration
maxDelay time.Duration
executorRunning *syncext.AtomicBool
lastRequestTime time.Time
initialRequestTime time.Time
}
func NewDelayedCombiningInvoker(action func(), delay time.Duration, maxDelay time.Duration) *DelayedCombiningInvoker {
return &DelayedCombiningInvoker{
action: action,
delay: delay,
maxDelay: maxDelay,
executorRunning: syncext.NewAtomicBool(false),
triggerChan: make(chan bool),
cancelChan: make(chan bool, 1),
execNowChan: make(chan bool, 1),
lastRequestTime: time.Now(),
initialRequestTime: time.Now(),
}
}
func (d *DelayedCombiningInvoker) Request() {
now := time.Now()
d.syncLock.Lock()
defer d.syncLock.Unlock()
if d.executorRunning.Get() {
d.lastRequestTime = now
d.triggerChan <- true
} else {
d.initialRequestTime = now
d.lastRequestTime = now
d.executorRunning.Set(true)
syncext.ReadNonBlocking(d.triggerChan) // clear the channel
syncext.ReadNonBlocking(d.cancelChan) // clear the channel
syncext.ReadNonBlocking(d.execNowChan) // clear the channel
go d.run()
}
}
func (d *DelayedCombiningInvoker) run() {
defer func() {
d.syncLock.Lock()
d.executorRunning.Set(false)
d.syncLock.Unlock()
}()
for {
d.syncLock.Lock()
timeOut := min(d.maxDelay-time.Since(d.initialRequestTime), d.delay-time.Since(d.lastRequestTime))
if timeOut < 0 {
timeOut = 0
}
d.syncLock.Unlock()
immediately := false
select {
case <-d.execNowChan:
// run immediately
immediately = true
break
case <-d.triggerChan:
// external trigger - needs to re-evaluate
break
case <-d.cancelChan:
// cancel
return
case <-time.After(timeOut):
// time elapsed - check for execution
break
}
d.syncLock.Lock()
execute := immediately || time.Since(d.lastRequestTime) >= d.delay || time.Since(d.initialRequestTime) >= d.maxDelay
if !execute {
d.syncLock.Unlock()
continue
}
_ = langext.RunPanicSafe(d.action)
d.syncLock.Unlock()
return
}
}
func (d *DelayedCombiningInvoker) CancelPendingRequests() {
d.syncLock.Lock()
defer d.syncLock.Unlock()
syncext.WriteNonBlocking(d.cancelChan, true)
}
func (d *DelayedCombiningInvoker) HasPendingRequests() bool {
return d.executorRunning.Get()
}
func (d *DelayedCombiningInvoker) ExecuteNow() bool {
d.syncLock.Lock()
defer d.syncLock.Unlock()
if d.executorRunning.Get() {
syncext.WriteNonBlocking(d.execNowChan, true)
return true
} else {
return false
}
}
func (d *DelayedCombiningInvoker) WaitForCompletion(ctx context.Context) error {
return d.executorRunning.WaitWithContext(ctx, false)
}
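
A sketch of the intended call pattern: many Request() calls in quick succession collapse into a single execution of the action once `delay` has passed without a new request (or `maxDelay` since the first one). The durations below are arbitrary.

package main

import (
    "context"
    "fmt"
    "sync/atomic"
    "time"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    var calls atomic.Int64
    inv := dataext.NewDelayedCombiningInvoker(func() { calls.Add(1) }, 100*time.Millisecond, 1*time.Second)

    // five requests within ~50ms are combined into a single invocation of the action
    for i := 0; i < 5; i++ {
        inv.Request()
        time.Sleep(10 * time.Millisecond)
    }

    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()
    _ = inv.WaitForCompletion(ctx)

    fmt.Println("action executed", calls.Load(), "time(s)") // expected: 1
}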


@@ -19,38 +19,38 @@ import (
 // There are also a bunch of unit tests to ensure that the cache is always in a consistent state
 //
-type LRUMap[TData any] struct {
+type LRUMap[TKey comparable, TData any] struct {
 	maxsize int
 	lock    sync.Mutex
-	cache   map[string]*cacheNode[TData]
+	cache   map[TKey]*cacheNode[TKey, TData]
-	lfuHead *cacheNode[TData]
+	lfuHead *cacheNode[TKey, TData]
-	lfuTail *cacheNode[TData]
+	lfuTail *cacheNode[TKey, TData]
 }
-type cacheNode[TData any] struct {
+type cacheNode[TKey comparable, TData any] struct {
-	key    string
+	key    TKey
 	data   TData
-	parent *cacheNode[TData]
+	parent *cacheNode[TKey, TData]
-	child  *cacheNode[TData]
+	child  *cacheNode[TKey, TData]
 }
-func NewLRUMap[TData any](size int) *LRUMap[TData] {
+func NewLRUMap[TKey comparable, TData any](size int) *LRUMap[TKey, TData] {
 	if size <= 2 && size != 0 {
 		panic("Size must be > 2 (or 0)")
 	}
-	return &LRUMap[TData]{
+	return &LRUMap[TKey, TData]{
 		maxsize: size,
 		lock:    sync.Mutex{},
-		cache:   make(map[string]*cacheNode[TData], size+1),
+		cache:   make(map[TKey]*cacheNode[TKey, TData], size+1),
 		lfuHead: nil,
 		lfuTail: nil,
 	}
 }
-func (c *LRUMap[TData]) Put(key string, value TData) {
+func (c *LRUMap[TKey, TData]) Put(key TKey, value TData) {
 	if c.maxsize == 0 {
 		return // cache disabled
 	}
@@ -68,7 +68,7 @@ func (c *LRUMap[TData]) Put(key string, value TData) {
 	}
 	// key does not exist: insert into map and add to top of LFU
-	node = &cacheNode[TData]{
+	node = &cacheNode[TKey, TData]{
 		key:    key,
 		data:   value,
 		parent: nil,
@@ -93,7 +93,7 @@ func (c *LRUMap[TData]) Put(key string, value TData) {
 	}
 }
-func (c *LRUMap[TData]) TryGet(key string) (TData, bool) {
+func (c *LRUMap[TKey, TData]) TryGet(key TKey) (TData, bool) {
 	if c.maxsize == 0 {
 		return *new(TData), false // cache disabled
 	}
@@ -109,7 +109,7 @@ func (c *LRUMap[TData]) TryGet(key string) (TData, bool) {
 	return val.data, ok
 }
-func (c *LRUMap[TData]) moveNodeToTop(node *cacheNode[TData]) {
+func (c *LRUMap[TKey, TData]) moveNodeToTop(node *cacheNode[TKey, TData]) {
 	// (only called in critical section !)
 	if c.lfuHead == node { // fast case
@@ -142,7 +142,7 @@ func (c *LRUMap[TData]) moveNodeToTop(node *cacheNode[TData]) {
 	}
 }
-func (c *LRUMap[TData]) Size() int {
+func (c *LRUMap[TKey, TData]) Size() int {
 	c.lock.Lock()
 	defer c.lock.Unlock()
 	return len(c.cache)

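With the key type now generic, the cache can be keyed by any comparable type; a short sketch (values are illustrative):

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    // previously keys were always strings; now any comparable type works
    cache := dataext.NewLRUMap[int, string](8)

    cache.Put(1, "one")
    cache.Put(2, "two")

    if v, ok := cache.TryGet(1); ok {
        fmt.Println("hit:", v)
    }
    fmt.Println("size:", cache.Size())
}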

@@ -1,7 +1,7 @@
 package dataext
 import (
-	"gogs.mikescher.com/BlackForestBytes/goext/langext"
+	"git.blackforestbytes.com/BlackForestBytes/goext/langext"
 	"math/rand"
 	"strconv"
 	"testing"
@@ -12,7 +12,7 @@ func init() {
 }
 func TestResultCache1(t *testing.T) {
-	cache := NewLRUMap[string](8)
+	cache := NewLRUMap[string, string](8)
 	verifyLRUList(cache, t)
 	key := randomKey()
@@ -50,7 +50,7 @@ func TestResultCache1(t *testing.T) {
 }
 func TestResultCache2(t *testing.T) {
-	cache := NewLRUMap[string](8)
+	cache := NewLRUMap[string, string](8)
 	verifyLRUList(cache, t)
 	key1 := "key1"
@@ -150,7 +150,7 @@ func TestResultCache2(t *testing.T) {
 }
 func TestResultCache3(t *testing.T) {
-	cache := NewLRUMap[string](8)
+	cache := NewLRUMap[string, string](8)
 	verifyLRUList(cache, t)
 	key1 := "key1"
@@ -173,7 +173,7 @@ func TestResultCache3(t *testing.T) {
 }
 // does a basic consistency check over the internal cache representation
-func verifyLRUList[TData any](cache *LRUMap[TData], t *testing.T) {
+func verifyLRUList[TKey comparable, TData any](cache *LRUMap[TKey, TData], t *testing.T) {
 	size := 0
 	tailFound := false


@@ -1,7 +1,8 @@
 package dataext
 import (
-	"gogs.mikescher.com/BlackForestBytes/goext/langext"
+	"git.blackforestbytes.com/BlackForestBytes/goext/langext"
+	"git.blackforestbytes.com/BlackForestBytes/goext/tst"
 	"testing"
 )
@@ -43,10 +44,10 @@ func TestObjectMerge(t *testing.T) {
 	valueMerge := ObjectMerge(valueA, valueB)
-	assertPtrEqual(t, "Field1", valueMerge.Field1, valueB.Field1)
+	tst.AssertIdentPtrEqual(t, "Field1", valueMerge.Field1, valueB.Field1)
-	assertPtrEqual(t, "Field2", valueMerge.Field2, valueA.Field2)
+	tst.AssertIdentPtrEqual(t, "Field2", valueMerge.Field2, valueA.Field2)
-	assertPtrEqual(t, "Field3", valueMerge.Field3, valueB.Field3)
+	tst.AssertIdentPtrEqual(t, "Field3", valueMerge.Field3, valueB.Field3)
-	assertPtrEqual(t, "Field4", valueMerge.Field4, nil)
+	tst.AssertIdentPtrEqual(t, "Field4", valueMerge.Field4, nil)
 }

83
dataext/optional.go Normal file

@@ -0,0 +1,83 @@
package dataext
import (
"encoding/json"
"errors"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
)
type JsonOpt[T any] struct {
isSet bool
value T
}
func NewJsonOpt[T any](v T) JsonOpt[T] {
return JsonOpt[T]{isSet: true, value: v}
}
func EmptyJsonOpt[T any]() JsonOpt[T] {
return JsonOpt[T]{isSet: false}
}
// MarshalJSON returns m as the JSON encoding of m.
func (m JsonOpt[T]) MarshalJSON() ([]byte, error) {
if !m.isSet {
return []byte("null"), nil // actually this would be undefined - but undefined is not valid JSON
}
return json.Marshal(m.value)
}
// UnmarshalJSON sets *m to a copy of data.
func (m *JsonOpt[T]) UnmarshalJSON(data []byte) error {
if m == nil {
return errors.New("JsonOpt: UnmarshalJSON on nil pointer")
}
m.isSet = true
return json.Unmarshal(data, &m.value)
}
func (m JsonOpt[T]) IsSet() bool {
return m.isSet
}
func (m JsonOpt[T]) IsUnset() bool {
return !m.isSet
}
func (m JsonOpt[T]) Value() (T, bool) {
if !m.isSet {
return *new(T), false
}
return m.value, true
}
func (m JsonOpt[T]) ValueOrNil() *T {
if !m.isSet {
return nil
}
return &m.value
}
func (m JsonOpt[T]) ValueDblPtrOrNil() **T {
if !m.isSet {
return nil
}
return langext.DblPtr(m.value)
}
func (m JsonOpt[T]) MustValue() T {
if !m.isSet {
panic("value not set")
}
return m.value
}
func (m JsonOpt[T]) IfSet(fn func(v T)) bool {
if !m.isSet {
return false
}
fn(m.value)
return true
}
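
A sketch of what JsonOpt is for: unlike a plain field or pointer, it records whether the key appeared in the JSON at all, which is useful for PATCH-style requests. The struct and payload below are made up:

package main

import (
    "encoding/json"
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

type patchRequest struct {
    Name dataext.JsonOpt[string] `json:"name"`
    Age  dataext.JsonOpt[int]    `json:"age"`
}

func main() {
    var req patchRequest
    // "age" is missing entirely, so req.Age stays unset
    _ = json.Unmarshal([]byte(`{"name":"alice"}`), &req)

    fmt.Println(req.Name.IsSet(), req.Name.MustValue()) // true alice
    fmt.Println(req.Age.IsSet())                        // false
    if v, ok := req.Age.Value(); ok {
        fmt.Println("age:", v)
    }
}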

241
dataext/pubsub.go Normal file

@@ -0,0 +1,241 @@
package dataext
import (
"context"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/syncext"
"github.com/rs/xid"
"iter"
"sync"
"time"
)
// PubSub is a simple Pub/Sub Broker
// Clients can subscribe to a namespace and receive published messages on this namespace
// Messages are broadcast to all subscribers
type PubSub[TNamespace comparable, TData any] struct {
masterLock *sync.Mutex
subscriptions map[TNamespace][]*pubSubSubscription[TNamespace, TData]
}
type PubSubSubscription interface {
Unsubscribe()
}
type pubSubSubscription[TNamespace comparable, TData any] struct {
ID string
parent *PubSub[TNamespace, TData]
namespace TNamespace
subLock *sync.Mutex
Func func(TData)
Chan chan TData
UnsubChan chan bool
}
func (p *pubSubSubscription[TNamespace, TData]) Unsubscribe() {
p.parent.unsubscribe(p)
}
func NewPubSub[TNamespace comparable, TData any](capacity int) *PubSub[TNamespace, TData] {
return &PubSub[TNamespace, TData]{
masterLock: &sync.Mutex{},
subscriptions: make(map[TNamespace][]*pubSubSubscription[TNamespace, TData], capacity),
}
}
func (ps *PubSub[TNamespace, TData]) Namespaces() []TNamespace {
ps.masterLock.Lock()
defer ps.masterLock.Unlock()
return langext.MapKeyArr(ps.subscriptions)
}
func (ps *PubSub[TNamespace, TData]) SubscriberCount(ns TNamespace) int {
ps.masterLock.Lock()
defer ps.masterLock.Unlock()
return len(ps.subscriptions[ns])
}
// Publish sends `data` to all subscribers.
// Non-blocking - if a subscriber is currently not listening, it is skipped (then actualReceiver < subscriber).
func (ps *PubSub[TNamespace, TData]) Publish(ns TNamespace, data TData) (subscriber int, actualReceiver int) {
ps.masterLock.Lock()
subs := langext.ArrCopy(ps.subscriptions[ns])
ps.masterLock.Unlock()
subscriber = len(subs)
actualReceiver = 0
for _, sub := range subs {
func() {
sub.subLock.Lock()
defer sub.subLock.Unlock()
if sub.Func != nil {
go func() { sub.Func(data) }()
actualReceiver++
} else if sub.Chan != nil {
msgSent := syncext.WriteNonBlocking(sub.Chan, data)
if msgSent {
actualReceiver++
}
}
}()
}
return subscriber, actualReceiver
}
// PublishWithContext sends `data` to all subscribers.
// Blocking - if a subscriber is currently not listening, we wait for it (but error out when the context expires).
func (ps *PubSub[TNamespace, TData]) PublishWithContext(ctx context.Context, ns TNamespace, data TData) (subscriber int, actualReceiver int, err error) {
ps.masterLock.Lock()
subs := langext.ArrCopy(ps.subscriptions[ns])
ps.masterLock.Unlock()
subscriber = len(subs)
actualReceiver = 0
for _, sub := range subs {
err := func() error {
sub.subLock.Lock()
defer sub.subLock.Unlock()
if err := ctx.Err(); err != nil {
return err
}
if sub.Func != nil {
go func() { sub.Func(data) }()
actualReceiver++
} else if sub.Chan != nil {
err := syncext.WriteChannelWithContext(ctx, sub.Chan, data)
if err != nil {
return err
}
actualReceiver++
}
return nil
}()
if err != nil {
return subscriber, actualReceiver, err
}
}
return subscriber, actualReceiver, nil
}
// PublishWithTimeout sends `data` to all subscribers.
// Blocking - if a subscriber is currently not listening, we wait at most `timeout`; if the timeout is exceeded then actualReceiver < subscriber.
func (ps *PubSub[TNamespace, TData]) PublishWithTimeout(ns TNamespace, data TData, timeout time.Duration) (subscriber int, actualReceiver int) {
ps.masterLock.Lock()
subs := langext.ArrCopy(ps.subscriptions[ns])
ps.masterLock.Unlock()
subscriber = len(subs)
actualReceiver = 0
for _, sub := range subs {
func() {
sub.subLock.Lock()
defer sub.subLock.Unlock()
if sub.Func != nil {
go func() { sub.Func(data) }()
actualReceiver++
} else if sub.Chan != nil {
ok := syncext.WriteChannelWithTimeout(sub.Chan, data, timeout)
if ok {
actualReceiver++
}
}
}()
}
return subscriber, actualReceiver
}
func (ps *PubSub[TNamespace, TData]) SubscribeByCallback(ns TNamespace, fn func(TData)) PubSubSubscription {
ps.masterLock.Lock()
defer ps.masterLock.Unlock()
sub := &pubSubSubscription[TNamespace, TData]{ID: xid.New().String(), namespace: ns, parent: ps, subLock: &sync.Mutex{}, Func: fn, UnsubChan: nil}
ps.subscriptions[ns] = append(ps.subscriptions[ns], sub)
return sub
}
func (ps *PubSub[TNamespace, TData]) SubscribeByChan(ns TNamespace, chanBufferSize int) (chan TData, PubSubSubscription) {
ps.masterLock.Lock()
defer ps.masterLock.Unlock()
msgCh := make(chan TData, chanBufferSize)
sub := &pubSubSubscription[TNamespace, TData]{ID: xid.New().String(), namespace: ns, parent: ps, subLock: &sync.Mutex{}, Chan: msgCh, UnsubChan: nil}
ps.subscriptions[ns] = append(ps.subscriptions[ns], sub)
return msgCh, sub
}
func (ps *PubSub[TNamespace, TData]) SubscribeByIter(ns TNamespace, chanBufferSize int) (iter.Seq[TData], PubSubSubscription) {
ps.masterLock.Lock()
defer ps.masterLock.Unlock()
msgCh := make(chan TData, chanBufferSize)
unsubChan := make(chan bool, 8)
sub := &pubSubSubscription[TNamespace, TData]{ID: xid.New().String(), namespace: ns, parent: ps, subLock: &sync.Mutex{}, Chan: msgCh, UnsubChan: unsubChan}
ps.subscriptions[ns] = append(ps.subscriptions[ns], sub)
iterFun := func(yield func(TData) bool) {
for {
select {
case msg := <-msgCh:
if !yield(msg) {
sub.Unsubscribe()
return
}
case <-sub.UnsubChan:
sub.Unsubscribe()
return
}
}
}
return iterFun, sub
}
func (ps *PubSub[TNamespace, TData]) unsubscribe(p *pubSubSubscription[TNamespace, TData]) {
ps.masterLock.Lock()
defer ps.masterLock.Unlock()
p.subLock.Lock()
defer p.subLock.Unlock()
if p.Chan != nil {
close(p.Chan)
p.Chan = nil
}
if p.UnsubChan != nil {
syncext.WriteNonBlocking(p.UnsubChan, true)
close(p.UnsubChan)
p.UnsubChan = nil
}
ps.subscriptions[p.namespace] = langext.ArrFilter(ps.subscriptions[p.namespace], func(v *pubSubSubscription[TNamespace, TData]) bool {
return v.ID != p.ID
})
if len(ps.subscriptions[p.namespace]) == 0 {
delete(ps.subscriptions, p.namespace)
}
}
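
A minimal publish/subscribe sketch; the namespace and payload values are illustrative:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    // the namespace type is generic - any comparable type works (string, int, custom ID types, ...)
    broker := dataext.NewPubSub[string, string](4)

    ch, sub := broker.SubscribeByChan("orders", 16)
    defer sub.Unsubscribe()

    subscribers, delivered := broker.Publish("orders", "order-created")
    fmt.Println(subscribers, delivered) // 1 1

    fmt.Println(<-ch) // order-created
}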

428
dataext/pubsub_test.go Normal file

@@ -0,0 +1,428 @@
package dataext
import (
"context"
"sync"
"testing"
"time"
)
func TestNewPubSub(t *testing.T) {
ps := NewPubSub[string, string](10)
if ps == nil {
t.Fatal("NewPubSub returned nil")
}
if ps.masterLock == nil {
t.Fatal("masterLock is nil")
}
if ps.subscriptions == nil {
t.Fatal("subscriptions is nil")
}
}
func TestPubSub_Namespaces(t *testing.T) {
ps := NewPubSub[string, string](10)
// Initially no namespaces
namespaces := ps.Namespaces()
if len(namespaces) != 0 {
t.Fatalf("Expected 0 namespaces, got %d", len(namespaces))
}
// Add a subscription to create a namespace
_, sub1 := ps.SubscribeByChan("test-ns1", 1)
defer sub1.Unsubscribe()
// Add another subscription to a different namespace
_, sub2 := ps.SubscribeByChan("test-ns2", 1)
defer sub2.Unsubscribe()
// Check namespaces
namespaces = ps.Namespaces()
if len(namespaces) != 2 {
t.Fatalf("Expected 2 namespaces, got %d", len(namespaces))
}
// Check if namespaces contain the expected values
found1, found2 := false, false
for _, ns := range namespaces {
if ns == "test-ns1" {
found1 = true
}
if ns == "test-ns2" {
found2 = true
}
}
if !found1 || !found2 {
t.Fatalf("Expected to find both namespaces, found ns1: %v, ns2: %v", found1, found2)
}
}
func TestPubSub_SubscribeByCallback(t *testing.T) {
ps := NewPubSub[string, string](10)
var received string
var wg sync.WaitGroup
wg.Add(1)
callback := func(msg string) {
received = msg
wg.Done()
}
sub := ps.SubscribeByCallback("test-ns", callback)
defer sub.Unsubscribe()
// Publish a message
subs, receivers := ps.Publish("test-ns", "hello")
if subs != 1 {
t.Fatalf("Expected 1 subscriber, got %d", subs)
}
if receivers != 1 {
t.Fatalf("Expected 1 receiver, got %d", receivers)
}
// Wait for the callback to be executed
wg.Wait()
if received != "hello" {
t.Fatalf("Expected to receive 'hello', got '%s'", received)
}
}
func TestPubSub_SubscribeByChan(t *testing.T) {
ps := NewPubSub[string, string](10)
ch, sub := ps.SubscribeByChan("test-ns", 1)
defer sub.Unsubscribe()
// Publish a message
subs, receivers := ps.Publish("test-ns", "hello")
if subs != 1 {
t.Fatalf("Expected 1 subscriber, got %d", subs)
}
if receivers != 1 {
t.Fatalf("Expected 1 receiver, got %d", receivers)
}
// Read from the channel with a timeout to avoid blocking
select {
case msg := <-ch:
if msg != "hello" {
t.Fatalf("Expected to receive 'hello', got '%s'", msg)
}
case <-time.After(time.Second):
t.Fatal("Timed out waiting for message")
}
}
func TestPubSub_SubscribeByIter(t *testing.T) {
ps := NewPubSub[string, string](10)
iterSeq, sub := ps.SubscribeByIter("test-ns", 1)
defer sub.Unsubscribe()
// Channel to communicate when message is received
done := make(chan bool)
received := false
// Start a goroutine to use the iterator
go func() {
for msg := range iterSeq {
if msg == "hello" {
received = true
done <- true
return // Stop iteration
}
}
}()
// Give time for the iterator to start
time.Sleep(100 * time.Millisecond)
// Publish a message
ps.Publish("test-ns", "hello")
// Wait for the message to be received or timeout
select {
case <-done:
if !received {
t.Fatal("Message was received but not 'hello'")
}
case <-time.After(time.Second):
t.Fatal("Timed out waiting for message")
}
subCount := ps.SubscriberCount("test-ns")
if subCount != 0 {
t.Fatalf("Expected 0 receivers, got %d", subCount)
}
}
func TestPubSub_Publish(t *testing.T) {
ps := NewPubSub[string, string](10)
// Test publishing to a namespace with no subscribers
subs, receivers := ps.Publish("empty-ns", "hello")
if subs != 0 {
t.Fatalf("Expected 0 subscribers, got %d", subs)
}
if receivers != 0 {
t.Fatalf("Expected 0 receivers, got %d", receivers)
}
// Add a subscriber
ch, sub := ps.SubscribeByChan("test-ns", 1)
defer sub.Unsubscribe()
// Publish a message
subs, receivers = ps.Publish("test-ns", "hello")
if subs != 1 {
t.Fatalf("Expected 1 subscriber, got %d", subs)
}
if receivers != 1 {
t.Fatalf("Expected 1 receiver, got %d", receivers)
}
// Verify the message was received
select {
case msg := <-ch:
if msg != "hello" {
t.Fatalf("Expected to receive 'hello', got '%s'", msg)
}
case <-time.After(time.Second):
t.Fatal("Timed out waiting for message")
}
// Test non-blocking behavior with a full channel
// First fill the channel
ps.Publish("test-ns", "fill")
// Now publish again - this should not block but skip the receiver
subs, receivers = ps.Publish("test-ns", "overflow")
if subs != 1 {
t.Fatalf("Expected 1 subscriber, got %d", subs)
}
// The receiver count might be 0 if the channel is full
// Drain the channel
<-ch
}
func TestPubSub_PublishWithTimeout(t *testing.T) {
ps := NewPubSub[string, string](10)
// Add a subscriber with a channel
ch, sub := ps.SubscribeByChan("test-ns", 1)
defer sub.Unsubscribe()
// Publish with a timeout
subs, receivers := ps.PublishWithTimeout("test-ns", "hello", 100*time.Millisecond)
if subs != 1 {
t.Fatalf("Expected 1 subscriber, got %d", subs)
}
if receivers != 1 {
t.Fatalf("Expected 1 receiver, got %d", receivers)
}
// Verify the message was received
select {
case msg := <-ch:
if msg != "hello" {
t.Fatalf("Expected to receive 'hello', got '%s'", msg)
}
case <-time.After(time.Second):
t.Fatal("Timed out waiting for message")
}
// Fill the channel
ps.Publish("test-ns", "fill")
// Test timeout behavior with a full channel
start := time.Now()
subs, receivers = ps.PublishWithTimeout("test-ns", "timeout-test", 50*time.Millisecond)
elapsed := time.Since(start)
if subs != 1 {
t.Fatalf("Expected 1 subscriber, got %d", subs)
}
// The receiver count should be 0 if the timeout occurred
if elapsed < 50*time.Millisecond {
t.Fatalf("Expected to wait at least 50ms, only waited %v", elapsed)
}
// Drain the channel
<-ch
}
func TestPubSub_PublishWithContext(t *testing.T) {
ps := NewPubSub[string, string](10)
// Add a subscriber with a channel
ch, sub := ps.SubscribeByChan("test-ns", 1)
defer sub.Unsubscribe()
// Create a context
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()
// Publish with context
subs, receivers, err := ps.PublishWithContext(ctx, "test-ns", "hello")
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
if subs != 1 {
t.Fatalf("Expected 1 subscriber, got %d", subs)
}
if receivers != 1 {
t.Fatalf("Expected 1 receiver, got %d", receivers)
}
// Verify the message was received
select {
case msg := <-ch:
if msg != "hello" {
t.Fatalf("Expected to receive 'hello', got '%s'", msg)
}
case <-time.After(time.Second):
t.Fatal("Timed out waiting for message")
}
// Fill the channel
ps.Publish("test-ns", "fill")
// Test context cancellation with a full channel
ctx, cancel = context.WithCancel(context.Background())
// Cancel the context after a short delay
go func() {
time.Sleep(50 * time.Millisecond)
cancel()
}()
start := time.Now()
subs, receivers, err = ps.PublishWithContext(ctx, "test-ns", "context-test")
elapsed := time.Since(start)
if subs != 1 {
t.Fatalf("Expected 1 subscriber, got %d", subs)
}
// Should get a context canceled error
if err == nil {
t.Fatal("Expected context canceled error, got nil")
}
if elapsed < 50*time.Millisecond {
t.Fatalf("Expected to wait at least 50ms, only waited %v", elapsed)
}
// Drain the channel
<-ch
}
func TestPubSub_Unsubscribe(t *testing.T) {
ps := NewPubSub[string, string](10)
// Add a subscriber
ch, sub := ps.SubscribeByChan("test-ns", 1)
// Publish a message
subs, receivers := ps.Publish("test-ns", "hello")
if subs != 1 {
t.Fatalf("Expected 1 subscriber, got %d", subs)
}
if receivers != 1 {
t.Fatalf("Expected 1 receiver, got %d", receivers)
}
// Verify the message was received
select {
case msg := <-ch:
if msg != "hello" {
t.Fatalf("Expected to receive 'hello', got '%s'", msg)
}
case <-time.After(time.Second):
t.Fatal("Timed out waiting for message")
}
// Unsubscribe
sub.Unsubscribe()
// Publish again
subs, receivers = ps.Publish("test-ns", "after-unsub")
if subs != 0 {
t.Fatalf("Expected 0 subscribers after unsubscribe, got %d", subs)
}
if receivers != 0 {
t.Fatalf("Expected 0 receivers after unsubscribe, got %d", receivers)
}
// Check that the namespace is removed
namespaces := ps.Namespaces()
if len(namespaces) != 0 {
t.Fatalf("Expected 0 namespaces after unsubscribe, got %d", len(namespaces))
}
}
func TestPubSub_MultipleSubscribers(t *testing.T) {
ps := NewPubSub[string, string](10)
// Add multiple subscribers
ch1, sub1 := ps.SubscribeByChan("test-ns", 1)
defer sub1.Unsubscribe()
ch2, sub2 := ps.SubscribeByChan("test-ns", 1)
defer sub2.Unsubscribe()
var received string
var wg sync.WaitGroup
wg.Add(1)
sub3 := ps.SubscribeByCallback("test-ns", func(msg string) {
received = msg
wg.Done()
})
defer sub3.Unsubscribe()
// Publish a message
subs, receivers := ps.Publish("test-ns", "hello")
if subs != 3 {
t.Fatalf("Expected 3 subscribers, got %d", subs)
}
if receivers != 3 {
t.Fatalf("Expected 3 receivers, got %d", receivers)
}
// Verify the message was received by all subscribers
select {
case msg := <-ch1:
if msg != "hello" {
t.Fatalf("Expected ch1 to receive 'hello', got '%s'", msg)
}
case <-time.After(time.Second):
t.Fatal("Timed out waiting for message on ch1")
}
select {
case msg := <-ch2:
if msg != "hello" {
t.Fatalf("Expected ch2 to receive 'hello', got '%s'", msg)
}
case <-time.After(time.Second):
t.Fatal("Timed out waiting for message on ch2")
}
// Wait for the callback
wg.Wait()
if received != "hello" {
t.Fatalf("Expected callback to receive 'hello', got '%s'", received)
}
}

144
dataext/ringBuffer.go Normal file

@@ -0,0 +1,144 @@
package dataext
import "iter"
type RingBuffer[T any] struct {
items []T // backing slice (len == capacity)
capacity int // max number of items the buffer can hold
size int // how many items are in the buffer
head int // index where the next Push will write
}
func NewRingBuffer[T any](capacity int) *RingBuffer[T] {
return &RingBuffer[T]{
items: make([]T, capacity),
capacity: capacity,
size: 0,
head: 0,
}
}
func (rb *RingBuffer[T]) Push(item T) {
if rb.size < rb.capacity {
rb.size++
}
rb.items[rb.head] = item
rb.head = (rb.head + 1) % rb.capacity
}
func (rb *RingBuffer[T]) PushPop(item T) *T {
if rb.size < rb.capacity {
rb.size++
rb.items[rb.head] = item
rb.head = (rb.head + 1) % rb.capacity
return nil
} else {
prev := rb.items[rb.head]
rb.items[rb.head] = item
rb.head = (rb.head + 1) % rb.capacity
return &prev
}
}
func (rb *RingBuffer[T]) Peek() (T, bool) {
if rb.size == 0 {
return *new(T), false
}
return rb.items[(rb.head-1+rb.capacity)%rb.capacity], true
}
func (rb *RingBuffer[T]) Items() []T {
if rb.size < rb.capacity {
return rb.items[:rb.size]
}
return append(rb.items[rb.head:], rb.items[:rb.head]...)
}
func (rb *RingBuffer[T]) Size() int {
return rb.size
}
func (rb *RingBuffer[T]) Capacity() int {
return rb.capacity
}
func (rb *RingBuffer[T]) Clear() {
rb.size = 0
rb.head = 0
}
func (rb *RingBuffer[T]) IsFull() bool {
return rb.size == rb.capacity
}
func (rb *RingBuffer[T]) At(i int) T {
if i < 0 || i >= rb.size {
panic("Index out of bounds")
}
if rb.size < rb.capacity {
return rb.items[i]
}
return rb.items[(rb.head+i)%rb.capacity]
}
func (rb *RingBuffer[T]) Get(i int) (T, bool) {
if i < 0 || i >= rb.size {
return *new(T), false
}
if rb.size < rb.capacity {
return rb.items[i], true
}
return rb.items[(rb.head+i)%rb.capacity], true
}
func (rb *RingBuffer[T]) Iter() iter.Seq[T] {
return func(yield func(T) bool) {
for i := 0; i < rb.size; i++ {
if !yield(rb.At(i)) {
return
}
}
}
}
func (rb *RingBuffer[T]) Iter2() iter.Seq2[int, T] {
return func(yield func(int, T) bool) {
for i := 0; i < rb.size; i++ {
if !yield(i, rb.At(i)) {
return
}
}
}
}
func (rb *RingBuffer[T]) Remove(fnEqual func(v T) bool) int {
// Mike [2024-11-13]: I *really* tried to write an in-place algorithm to remove elements
// But after careful consideration, I left that as an exercise for future readers
// It is, surprisingly, non-trivial, especially because the head-ptr must be weirdly updated
// And our At() method does not work correctly with {head != 0 && size < capacity}
dc := 0
b := make([]T, rb.capacity)
bsize := 0
for i := 0; i < rb.size; i++ {
comp := rb.At(i)
if fnEqual(comp) {
dc++
} else {
b[bsize] = comp
bsize++
}
}
if dc == 0 {
return 0
}
rb.items = b
rb.size = bsize
rb.head = bsize % rb.capacity
return dc
}
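
A short sketch of the ring-buffer behaviour described above (fixed capacity, oldest entry evicted on overflow); values are illustrative:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    rb := dataext.NewRingBuffer[int](3)

    rb.Push(1)
    rb.Push(2)
    rb.Push(3)
    if evicted := rb.PushPop(4); evicted != nil {
        fmt.Println("evicted:", *evicted) // evicted: 1
    }

    fmt.Println(rb.Items()) // [2 3 4] - oldest to newest

    // range over the buffer in insertion order (Go 1.23+ range-over-func iterators)
    for i, v := range rb.Iter2() {
        fmt.Println(i, v)
    }
}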

447
dataext/ringBuffer_test.go Normal file

@@ -0,0 +1,447 @@
package dataext
import "testing"
func TestRingBufferPushAddsItem(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
if rb.Size() != 1 {
t.Errorf("Expected size 1, got %d", rb.Size())
}
if item, _ := rb.Peek(); item != 1 {
t.Errorf("Expected item 1, got %d", item)
}
}
func TestRingBufferPushPopReturnsOldestItem(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
if item := rb.PushPop(4); item == nil || *item != 1 {
t.Errorf("Expected item 1, got %v", item)
}
}
func TestRingBufferPeekReturnsLastPushedItem(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
if item, _ := rb.Peek(); item != 2 {
t.Errorf("Expected item 2, got %d", item)
}
}
func TestRingBufferOverflow1(t *testing.T) {
rb := NewRingBuffer[int](5)
rb.Push(1) // overridden
rb.Push(2) // overridden
rb.Push(3)
rb.Push(9)
rb.Push(4)
rb.Push(5)
rb.Push(7)
if rb.Size() != 5 {
t.Errorf("Expected size 4, got %d", rb.Size())
}
expected := []int{3, 9, 4, 5, 7}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferItemsReturnsAllItems(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
items := rb.Items()
expected := []int{1, 2, 3}
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferClearEmptiesBuffer(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Clear()
if rb.Size() != 0 {
t.Errorf("Expected size 0, got %d", rb.Size())
}
}
func TestRingBufferIsFullReturnsTrueWhenFull(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
if !rb.IsFull() {
t.Errorf("Expected buffer to be full")
}
}
func TestRingBufferAtReturnsCorrectItem(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
if item := rb.At(1); item != 2 {
t.Errorf("Expected item 2, got %d", item)
}
}
func TestRingBufferGetReturnsCorrectItem(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
if item, ok := rb.Get(1); !ok || item != 2 {
t.Errorf("Expected item 2, got %d", item)
}
}
func TestRingBufferRemoveDeletesMatchingItems(t *testing.T) {
rb := NewRingBuffer[int](5)
rb.Push(1)
rb.Push(2)
rb.Push(3)
rb.Push(2)
rb.Push(4)
removed := rb.Remove(func(v int) bool { return v == 2 })
if removed != 2 {
t.Errorf("Expected 2 items removed, got %d", removed)
}
if rb.Size() != 3 {
t.Errorf("Expected size 3, got %d", rb.Size())
}
expected := []int{1, 3, 4}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferRemoveDeletesMatchingItems2(t *testing.T) {
rb := NewRingBuffer[int](5)
rb.Push(1)
rb.Push(2)
rb.Push(3)
rb.Push(2)
rb.Push(4)
removed := rb.Remove(func(v int) bool { return v == 3 })
if removed != 1 {
t.Errorf("Expected 2 items removed, got %d", removed)
}
if rb.Size() != 4 {
t.Errorf("Expected size 3, got %d", rb.Size())
}
expected := []int{1, 2, 2, 4}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferRemoveDeletesMatchingItems3(t *testing.T) {
rb := NewRingBuffer[int](5)
rb.Push(1)
rb.Push(2)
rb.Push(3)
rb.Push(9)
rb.Push(4)
removed := rb.Remove(func(v int) bool { return v == 3 })
if removed != 1 {
t.Errorf("Expected 2 items removed, got %d", removed)
}
if rb.Size() != 4 {
t.Errorf("Expected size 3, got %d", rb.Size())
}
expected := []int{1, 2, 9, 4}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferRemoveDeletesMatchingItems4(t *testing.T) {
rb := NewRingBuffer[int](5)
rb.Push(1) // overridden
rb.Push(2) // overridden
rb.Push(3)
rb.Push(9)
rb.Push(4)
rb.Push(5)
rb.Push(7)
removed := rb.Remove(func(v int) bool { return v == 7 })
if removed != 1 {
t.Errorf("Expected 1 items removed, got %d", removed)
}
if rb.Size() != 4 {
t.Errorf("Expected size 4, got %d", rb.Size())
}
expected := []int{3, 9, 4, 5}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferRemoveDeletesMatchingItems5(t *testing.T) {
rb := NewRingBuffer[int](5)
rb.Push(1) // overridden
rb.Push(2) // overridden
rb.Push(3)
rb.Push(9)
rb.Push(4)
rb.Push(5)
rb.Push(7)
removed := rb.Remove(func(v int) bool { return v == 3 })
if removed != 1 {
t.Errorf("Expected 1 items removed, got %d", removed)
}
if rb.Size() != 4 {
t.Errorf("Expected size 4, got %d", rb.Size())
}
expected := []int{9, 4, 5, 7}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferRemoveDeletesMatchingItems6(t *testing.T) {
rb := NewRingBuffer[int](5)
rb.Push(1) // overridden
rb.Push(2) // overridden
rb.Push(3)
rb.Push(9)
rb.Push(4)
rb.Push(5)
rb.Push(7)
removed := rb.Remove(func(v int) bool { return v == 1 })
if removed != 0 {
t.Errorf("Expected 0 items removed, got %d", removed)
}
if rb.Size() != 5 {
t.Errorf("Expected size 5, got %d", rb.Size())
}
expected := []int{3, 9, 4, 5, 7}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
if !rb.IsFull() {
t.Errorf("Expected buffer to not be full")
}
}
func TestRingBufferRemoveDeletesMatchingItems7(t *testing.T) {
rb := NewRingBuffer[int](5)
rb.Push(1) // overridden
rb.Push(2) // overridden
rb.Push(3)
rb.Push(9)
rb.Push(4)
rb.Push(5)
rb.Push(7)
removed := rb.Remove(func(v int) bool { return v == 9 })
if removed != 1 {
t.Errorf("Expected 1 items removed, got %d", removed)
}
if rb.Size() != 4 {
t.Errorf("Expected size 4, got %d", rb.Size())
}
expected := []int{3, 4, 5, 7}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
if rb.IsFull() {
t.Errorf("Expected buffer to not be full")
}
}
func TestRingBufferAddItemsToFullRingBuffer(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
rb.Push(4)
if rb.Size() != 3 {
t.Errorf("Expected size 3, got %d", rb.Size())
}
expected := []int{2, 3, 4}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferAddItemsToNonFullRingBuffer(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
if rb.Size() != 2 {
t.Errorf("Expected size 2, got %d", rb.Size())
}
expected := []int{1, 2}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferRemoveItemsFromNonFullRingBuffer(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
removed := rb.Remove(func(v int) bool { return v == 1 })
if removed != 1 {
t.Errorf("Expected 1 item removed, got %d", removed)
}
if rb.Size() != 1 {
t.Errorf("Expected size 1, got %d", rb.Size())
}
expected := []int{2}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferRemoveItemsFromFullRingBuffer(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
removed := rb.Remove(func(v int) bool { return v == 2 })
if removed != 1 {
t.Errorf("Expected 1 item removed, got %d", removed)
}
if rb.Size() != 2 {
t.Errorf("Expected size 2, got %d", rb.Size())
}
expected := []int{1, 3}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferRemoveMultipleItemsFromRingBuffer(t *testing.T) {
rb := NewRingBuffer[int](5)
rb.Push(1)
rb.Push(2)
rb.Push(3)
rb.Push(2)
rb.Push(4)
removed := rb.Remove(func(v int) bool { return v == 2 })
if removed != 2 {
t.Errorf("Expected 2 items removed, got %d", removed)
}
if rb.Size() != 3 {
t.Errorf("Expected size 3, got %d", rb.Size())
}
expected := []int{1, 3, 4}
items := rb.Items()
for i, item := range items {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
}
}
func TestRingBufferRemoveAllItemsFromRingBuffer(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
removed := rb.Remove(func(v int) bool { return true })
if removed != 3 {
t.Errorf("Expected 3 items removed, got %d", removed)
}
if rb.Size() != 0 {
t.Errorf("Expected size 0, got %d", rb.Size())
}
}
func TestRingBufferRemoveNoItemsFromRingBuffer(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
removed := rb.Remove(func(v int) bool { return false })
if removed != 0 {
t.Errorf("Expected 0 items removed, got %d", removed)
}
if rb.Size() != 3 {
t.Errorf("Expected size 3, got %d", rb.Size())
}
}
func TestRingBufferIteratesOverAllItems(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
expected := []int{1, 2, 3}
i := 0
for item := range rb.Iter() {
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
i++
}
if i != len(expected) {
t.Errorf("Expected to iterate over %d items, but iterated over %d", len(expected), i)
}
}
func TestRingBufferIter2IteratesOverAllItemsWithIndices(t *testing.T) {
rb := NewRingBuffer[int](3)
rb.Push(1)
rb.Push(2)
rb.Push(3)
expected := []int{1, 2, 3}
i := 0
for index, item := range rb.Iter2() {
if index != i {
t.Errorf("Expected index %d, got %d", i, index)
}
if item != expected[i] {
t.Errorf("Expected item %d, got %d", expected[i], item)
}
i++
}
if i != len(expected) {
t.Errorf("Expected to iterate over %d items, but iterated over %d", len(expected), i)
}
}

116
dataext/stack.go Normal file

@@ -0,0 +1,116 @@
package dataext
import (
"errors"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"sync"
)
var ErrEmptyStack = errors.New("stack is empty")
type Stack[T any] struct {
lock *sync.Mutex
data []T
}
func NewStack[T any](threadsafe bool, initialCapacity int) *Stack[T] {
var lck *sync.Mutex = nil
if threadsafe {
lck = &sync.Mutex{}
}
return &Stack[T]{
lock: lck,
data: make([]T, 0, initialCapacity),
}
}
func (s *Stack[T]) Push(v T) {
if s.lock != nil {
s.lock.Lock()
defer s.lock.Unlock()
}
s.data = append(s.data, v)
}
func (s *Stack[T]) Pop() (T, error) {
if s.lock != nil {
s.lock.Lock()
defer s.lock.Unlock()
}
l := len(s.data)
if l == 0 {
return *new(T), ErrEmptyStack
}
result := s.data[l-1]
s.data = s.data[:l-1]
return result, nil
}
func (s *Stack[T]) OptPop() *T {
if s.lock != nil {
s.lock.Lock()
defer s.lock.Unlock()
}
l := len(s.data)
if l == 0 {
return nil
}
result := s.data[l-1]
s.data = s.data[:l-1]
return langext.Ptr(result)
}
func (s *Stack[T]) Peek() (T, error) {
if s.lock != nil {
s.lock.Lock()
defer s.lock.Unlock()
}
l := len(s.data)
if l == 0 {
return *new(T), ErrEmptyStack
}
return s.data[l-1], nil
}
func (s *Stack[T]) OptPeek() *T {
if s.lock != nil {
s.lock.Lock()
defer s.lock.Unlock()
}
l := len(s.data)
if l == 0 {
return nil
}
return langext.Ptr(s.data[l-1])
}
func (s *Stack[T]) Length() int {
if s.lock != nil {
s.lock.Lock()
defer s.lock.Unlock()
}
return len(s.data)
}
func (s *Stack[T]) Empty() bool {
if s.lock != nil {
s.lock.Lock()
defer s.lock.Unlock()
}
return len(s.data) == 0
}
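
A usage sketch for the stack above; passing threadsafe=true makes every operation take the internal mutex:

package main

import (
    "errors"
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    st := dataext.NewStack[string](true, 8)

    st.Push("a")
    st.Push("b")

    if top, err := st.Peek(); err == nil {
        fmt.Println("top:", top) // top: b
    }

    for !st.Empty() {
        v, _ := st.Pop()
        fmt.Println(v)
    }

    if _, err := st.Pop(); errors.Is(err, dataext.ErrEmptyStack) {
        fmt.Println("stack drained")
    }
}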


@@ -6,7 +6,7 @@ import (
"encoding/binary" "encoding/binary"
"errors" "errors"
"fmt" "fmt"
"gogs.mikescher.com/BlackForestBytes/goext/langext" "git.blackforestbytes.com/BlackForestBytes/goext/langext"
"hash" "hash"
"io" "io"
"reflect" "reflect"


@@ -1,8 +1,8 @@
 package dataext
 import (
-"encoding/hex"
-"gogs.mikescher.com/BlackForestBytes/goext/langext"
+"git.blackforestbytes.com/BlackForestBytes/goext/langext"
+"git.blackforestbytes.com/BlackForestBytes/goext/tst"
 "testing"
 )
@@ -18,14 +18,14 @@ func noErrStructHash(t *testing.T, dat any, opt ...StructHashOptions) []byte {
 func TestStructHashSimple(t *testing.T) {
-assertEqual(t, "209bf774af36cc3a045c152d9f1269ef3684ad819c1359ee73ff0283a308fefa", noErrStructHash(t, "Hello"))
-assertEqual(t, "c32f3626b981ae2997db656f3acad3f1dc9d30ef6b6d14296c023e391b25f71a", noErrStructHash(t, 0))
-assertEqual(t, "01b781b03e9586b257d387057dfc70d9f06051e7d3c1e709a57e13cc8daf3e35", noErrStructHash(t, []byte{}))
-assertEqual(t, "93e1dcd45c732fe0079b0fb3204c7c803f0921835f6bfee2e6ff263e73eed53c", noErrStructHash(t, []int{}))
-assertEqual(t, "54f637a376aad55b3160d98ebbcae8099b70d91b9400df23fb3709855d59800a", noErrStructHash(t, []int{1, 2, 3}))
-assertEqual(t, "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", noErrStructHash(t, nil))
-assertEqual(t, "349a7db91aa78fd30bbaa7c7f9c7bfb2fcfe72869b4861162a96713a852f60d3", noErrStructHash(t, []any{1, "", nil}))
-assertEqual(t, "ca51aab87808bf0062a4a024de6aac0c2bad54275cc857a4944569f89fd245ad", noErrStructHash(t, struct{}{}))
+tst.AssertHexEqual(t, "209bf774af36cc3a045c152d9f1269ef3684ad819c1359ee73ff0283a308fefa", noErrStructHash(t, "Hello"))
+tst.AssertHexEqual(t, "c32f3626b981ae2997db656f3acad3f1dc9d30ef6b6d14296c023e391b25f71a", noErrStructHash(t, 0))
+tst.AssertHexEqual(t, "01b781b03e9586b257d387057dfc70d9f06051e7d3c1e709a57e13cc8daf3e35", noErrStructHash(t, []byte{}))
+tst.AssertHexEqual(t, "93e1dcd45c732fe0079b0fb3204c7c803f0921835f6bfee2e6ff263e73eed53c", noErrStructHash(t, []int{}))
+tst.AssertHexEqual(t, "54f637a376aad55b3160d98ebbcae8099b70d91b9400df23fb3709855d59800a", noErrStructHash(t, []int{1, 2, 3}))
+tst.AssertHexEqual(t, "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", noErrStructHash(t, nil))
+tst.AssertHexEqual(t, "349a7db91aa78fd30bbaa7c7f9c7bfb2fcfe72869b4861162a96713a852f60d3", noErrStructHash(t, []any{1, "", nil}))
+tst.AssertHexEqual(t, "ca51aab87808bf0062a4a024de6aac0c2bad54275cc857a4944569f89fd245ad", noErrStructHash(t, struct{}{}))
 }
@@ -37,13 +37,13 @@ func TestStructHashSimpleStruct(t *testing.T) {
 F3 *int
 }
-assertEqual(t, "a90bff751c70c738bb5cfc9b108e783fa9c19c0bc9273458e0aaee6e74aa1b92", noErrStructHash(t, t0{
+tst.AssertHexEqual(t, "a90bff751c70c738bb5cfc9b108e783fa9c19c0bc9273458e0aaee6e74aa1b92", noErrStructHash(t, t0{
 F1: 10,
 F2: []string{"1", "2", "3"},
 F3: nil,
 }))
-assertEqual(t, "5d09090dc34ac59dd645f197a255f653387723de3afa1b614721ea5a081c675f", noErrStructHash(t, t0{
+tst.AssertHexEqual(t, "5d09090dc34ac59dd645f197a255f653387723de3afa1b614721ea5a081c675f", noErrStructHash(t, t0{
 F1: 10,
 F2: []string{"1", "2", "3"},
 F3: langext.Ptr(99),
@@ -64,7 +64,7 @@ func TestStructHashLayeredStruct(t *testing.T) {
 SV3 t1_1
 }
-assertEqual(t, "fd4ca071fb40a288fee4b7a3dfdaab577b30cb8f80f81ec511e7afd72dc3b469", noErrStructHash(t, t1_2{
+tst.AssertHexEqual(t, "fd4ca071fb40a288fee4b7a3dfdaab577b30cb8f80f81ec511e7afd72dc3b469", noErrStructHash(t, t1_2{
 SV1: nil,
 SV2: nil,
 SV3: t1_1{
@@ -73,7 +73,7 @@ func TestStructHashLayeredStruct(t *testing.T) {
 F15: false,
 },
 }))
-assertEqual(t, "3fbf7c67d8121deda075cc86319a4e32d71744feb2cebf89b43bc682f072a029", noErrStructHash(t, t1_2{
+tst.AssertHexEqual(t, "3fbf7c67d8121deda075cc86319a4e32d71744feb2cebf89b43bc682f072a029", noErrStructHash(t, t1_2{
 SV1: nil,
 SV2: &t1_1{},
 SV3: t1_1{
@@ -82,7 +82,7 @@ func TestStructHashLayeredStruct(t *testing.T) {
 F15: true,
 },
 }))
-assertEqual(t, "b1791ccd1b346c3ede5bbffda85555adcd8216b93ffca23f14fe175ec47c5104", noErrStructHash(t, t1_2{
+tst.AssertHexEqual(t, "b1791ccd1b346c3ede5bbffda85555adcd8216b93ffca23f14fe175ec47c5104", noErrStructHash(t, t1_2{
 SV1: &t1_1{},
 SV2: &t1_1{},
 SV3: t1_1{
@@ -101,7 +101,7 @@ func TestStructHashMap(t *testing.T) {
 F2 map[string]int
 }
-assertEqual(t, "d50c53ad1fafb448c33fddd5aca01a86a2edf669ce2ecab07ba6fe877951d824", noErrStructHash(t, t0{
+tst.AssertHexEqual(t, "d50c53ad1fafb448c33fddd5aca01a86a2edf669ce2ecab07ba6fe877951d824", noErrStructHash(t, t0{
 F1: 10,
 F2: map[string]int{
 "x": 1,
@@ -110,7 +110,7 @@ func TestStructHashMap(t *testing.T) {
 },
 }))
-assertEqual(t, "d50c53ad1fafb448c33fddd5aca01a86a2edf669ce2ecab07ba6fe877951d824", noErrStructHash(t, t0{
+tst.AssertHexEqual(t, "d50c53ad1fafb448c33fddd5aca01a86a2edf669ce2ecab07ba6fe877951d824", noErrStructHash(t, t0{
 F1: 10,
 F2: map[string]int{
 "a": 99,
@@ -128,16 +128,9 @@ func TestStructHashMap(t *testing.T) {
 m3["x"] = 1
 m3["a"] = 2
-assertEqual(t, "d50c53ad1fafb448c33fddd5aca01a86a2edf669ce2ecab07ba6fe877951d824", noErrStructHash(t, t0{
+tst.AssertHexEqual(t, "d50c53ad1fafb448c33fddd5aca01a86a2edf669ce2ecab07ba6fe877951d824", noErrStructHash(t, t0{
 F1: 10,
 F2: m3,
 }))
 }
-
-func assertEqual(t *testing.T, expected string, actual []byte) {
-actualStr := hex.EncodeToString(actual)
-if actualStr != expected {
-t.Errorf("values differ: Actual: '%v', Expected: '%v'", actualStr, expected)
-}
-}
}

dataext/syncMap.go (new file, +204 lines)

@@ -0,0 +1,204 @@
package dataext
import "sync"
type SyncMap[TKey comparable, TData any] struct {
data map[TKey]TData
lock sync.Mutex
}
func NewSyncMap[TKey comparable, TData any]() *SyncMap[TKey, TData] {
return &SyncMap[TKey, TData]{data: make(map[TKey]TData), lock: sync.Mutex{}}
}
func (s *SyncMap[TKey, TData]) Set(key TKey, data TData) {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
s.data[key] = data
}
func (s *SyncMap[TKey, TData]) SetIfNotContains(key TKey, data TData) bool {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
if _, existsInPreState := s.data[key]; existsInPreState {
return false
}
s.data[key] = data
return true
}
func (s *SyncMap[TKey, TData]) SetIfNotContainsFunc(key TKey, data func() TData) bool {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
if _, existsInPreState := s.data[key]; existsInPreState {
return false
}
s.data[key] = data()
return true
}
func (s *SyncMap[TKey, TData]) Get(key TKey) (TData, bool) {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
if v, ok := s.data[key]; ok {
return v, true
} else {
return *new(TData), false
}
}
func (s *SyncMap[TKey, TData]) GetAndSetIfNotContains(key TKey, data TData) TData {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
if v, ok := s.data[key]; ok {
return v
} else {
s.data[key] = data
return data
}
}
func (s *SyncMap[TKey, TData]) GetAndSetIfNotContainsFunc(key TKey, data func() TData) TData {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
if v, ok := s.data[key]; ok {
return v
} else {
dataObj := data()
s.data[key] = dataObj
return dataObj
}
}
func (s *SyncMap[TKey, TData]) Delete(key TKey) bool {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
_, ok := s.data[key]
delete(s.data, key)
return ok
}
func (s *SyncMap[TKey, TData]) DeleteIf(fn func(key TKey, data TData) bool) int {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
rm := 0
for k, v := range s.data {
if fn(k, v) {
delete(s.data, k)
rm++
}
}
return rm
}
func (s *SyncMap[TKey, TData]) Clear() {
s.lock.Lock()
defer s.lock.Unlock()
s.data = make(map[TKey]TData)
}
func (s *SyncMap[TKey, TData]) Contains(key TKey) bool {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
_, ok := s.data[key]
return ok
}
func (s *SyncMap[TKey, TData]) GetAllKeys() []TKey {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
r := make([]TKey, 0, len(s.data))
for k := range s.data {
r = append(r, k)
}
return r
}
func (s *SyncMap[TKey, TData]) GetAllValues() []TData {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
r := make([]TData, 0, len(s.data))
for _, v := range s.data {
r = append(r, v)
}
return r
}
func (s *SyncMap[TKey, TData]) Count() int {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TKey]TData)
}
return len(s.data)
}
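
A sketch, not part of the diff, of using GetAndSetIfNotContainsFunc as a small lazily-initialized cache; the factory runs at most once per key because the mutex is held while it executes:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    cache := dataext.NewSyncMap[string, []byte]()

    // first lookup: key is missing, the factory runs and its result is stored
    cfg := cache.GetAndSetIfNotContainsFunc("config", func() []byte {
        fmt.Println("loading config once")
        return []byte(`{"debug":true}`)
    })
    fmt.Println(string(cfg))

    // second lookup: stored value is returned, the factory is not called again
    _ = cache.GetAndSetIfNotContainsFunc("config", func() []byte {
        panic("unreachable")
    })

    fmt.Println(cache.Count(), cache.Contains("config")) // 1 true
}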

dataext/syncRingSet.go (new file, +143 lines)

@@ -0,0 +1,143 @@
package dataext
import "sync"
type SyncRingSet[TData comparable] struct {
data map[TData]bool
lock sync.Mutex
ring *RingBuffer[TData]
}
func NewSyncRingSet[TData comparable](capacity int) *SyncRingSet[TData] {
return &SyncRingSet[TData]{
data: make(map[TData]bool, capacity+1),
lock: sync.Mutex{},
ring: NewRingBuffer[TData](capacity),
}
}
// Add adds `value` to the set
// returns true if the value was actually inserted (value did not exist beforehand)
// returns false if the value already existed
func (s *SyncRingSet[TData]) Add(value TData) bool {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TData]bool)
}
_, existsInPreState := s.data[value]
if existsInPreState {
return false
}
prev := s.ring.PushPop(value)
s.data[value] = true
if prev != nil {
delete(s.data, *prev)
}
return true
}
func (s *SyncRingSet[TData]) AddAll(values []TData) {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TData]bool)
}
for _, value := range values {
_, existsInPreState := s.data[value]
if existsInPreState {
continue
}
prev := s.ring.PushPop(value)
s.data[value] = true
if prev != nil {
delete(s.data, *prev)
}
}
}
func (s *SyncRingSet[TData]) Remove(value TData) bool {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TData]bool)
}
_, existsInPreState := s.data[value]
if !existsInPreState {
return false
}
delete(s.data, value)
s.ring.Remove(func(v TData) bool { return value == v })
return true
}
func (s *SyncRingSet[TData]) RemoveAll(values []TData) {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TData]bool)
}
for _, value := range values {
delete(s.data, value)
s.ring.Remove(func(v TData) bool { return value == v })
}
}
func (s *SyncRingSet[TData]) Contains(value TData) bool {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TData]bool)
}
_, ok := s.data[value]
return ok
}
func (s *SyncRingSet[TData]) Get() []TData {
s.lock.Lock()
defer s.lock.Unlock()
if s.data == nil {
s.data = make(map[TData]bool)
}
r := make([]TData, 0, len(s.data))
for k := range s.data {
r = append(r, k)
}
return r
}
// AddIfNotContains
// returns true if the value was actually added (value did not exist beforehand)
// returns false if the value already existed
func (s *SyncRingSet[TData]) AddIfNotContains(key TData) bool {
return s.Add(key)
}
// RemoveIfContains
// returns true if the value was actually removed (value did exist beforehand)
// returns false if the value did not exist in the set
func (s *SyncRingSet[TData]) RemoveIfContains(key TData) bool {
return s.Remove(key)
}
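
A usage sketch for the intended deduplication case: once more than `capacity` distinct values were added, the oldest entry is evicted again (the element returned by PushPop is removed from the map above) and may later be re-accepted. The example itself is not part of the diff:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    // remembers at most the 1024 most recently added IDs
    seen := dataext.NewSyncRingSet[string](1024)

    handle := func(id string) {
        if !seen.Add(id) {
            return // duplicate inside the retention window, skip it
        }
        fmt.Println("processing", id)
    }

    handle("evt-1")
    handle("evt-2")
    handle("evt-1") // skipped, still in the ring
}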


@@ -7,6 +7,13 @@ type SyncSet[TData comparable] struct {
 lock sync.Mutex
 }
+func NewSyncSet[TData comparable]() *SyncSet[TData] {
+return &SyncSet[TData]{data: make(map[TData]bool), lock: sync.Mutex{}}
+}
+// Add adds `value` to the set
+// returns true if the value was actually inserted (value did not exist beforehand)
+// returns false if the value already existed
 func (s *SyncSet[TData]) Add(value TData) bool {
 s.lock.Lock()
 defer s.lock.Unlock()
@@ -15,10 +22,13 @@ func (s *SyncSet[TData]) Add(value TData) bool {
 s.data = make(map[TData]bool)
 }
-_, ok := s.data[value]
-s.data[value] = true
-return !ok
+_, existsInPreState := s.data[value]
+if existsInPreState {
+return false
+}
+s.data[value] = true
+return true
 }
 func (s *SyncSet[TData]) AddAll(values []TData) {
@@ -34,6 +44,36 @@ func (s *SyncSet[TData]) AddAll(values []TData) {
 }
 }
+func (s *SyncSet[TData]) Remove(value TData) bool {
+s.lock.Lock()
+defer s.lock.Unlock()
+if s.data == nil {
+s.data = make(map[TData]bool)
+}
+_, existsInPreState := s.data[value]
+if !existsInPreState {
+return false
+}
+delete(s.data, value)
+return true
+}
+func (s *SyncSet[TData]) RemoveAll(values []TData) {
+s.lock.Lock()
+defer s.lock.Unlock()
+if s.data == nil {
+s.data = make(map[TData]bool)
+}
+for _, value := range values {
+delete(s.data, value)
+}
+}
 func (s *SyncSet[TData]) Contains(value TData) bool {
 s.lock.Lock()
 defer s.lock.Unlock()
@@ -63,3 +103,17 @@ func (s *SyncSet[TData]) Get() []TData {
 return r
 }
+// AddIfNotContains
+// returns true if the value was actually added (value did not exist beforehand)
+// returns false if the value already existed
+func (s *SyncSet[TData]) AddIfNotContains(key TData) bool {
+return s.Add(key)
+}
+// RemoveIfContains
+// returns true if the value was actually removed (value did exist beforehand)
+// returns false if the value did not exist in the set
+func (s *SyncSet[TData]) RemoveIfContains(key TData) bool {
+return s.Remove(key)
+}
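
Add reports whether the value was actually inserted and the new Remove mirrors that; a brief sketch of the semantics documented in the comments above (example not part of the diff):

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    set := dataext.NewSyncSet[int]()

    fmt.Println(set.Add(42)) // true  - inserted
    fmt.Println(set.Add(42)) // false - already present

    fmt.Println(set.Remove(42)) // true  - removed
    fmt.Println(set.Remove(42)) // false - was not present
}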

dataext/tuple.go (new file, +241 lines)

@@ -0,0 +1,241 @@
package dataext
type ValueGroup interface {
TupleLength() int
TupleValues() []any
}
// ----------------------------------------------------------------------------
type Single[T1 any] struct {
V1 T1
}
func (s Single[T1]) TupleLength() int {
return 1
}
func (s Single[T1]) TupleValues() []any {
return []any{s.V1}
}
func NewSingle[T1 any](v1 T1) Single[T1] {
return Single[T1]{V1: v1}
}
func NewTuple1[T1 any](v1 T1) Single[T1] {
return Single[T1]{V1: v1}
}
// ----------------------------------------------------------------------------
type Tuple[T1 any, T2 any] struct {
V1 T1
V2 T2
}
func (t Tuple[T1, T2]) TupleLength() int {
return 2
}
func (t Tuple[T1, T2]) TupleValues() []any {
return []any{t.V1, t.V2}
}
func NewTuple[T1 any, T2 any](v1 T1, v2 T2) Tuple[T1, T2] {
return Tuple[T1, T2]{V1: v1, V2: v2}
}
func NewTuple2[T1 any, T2 any](v1 T1, v2 T2) Tuple[T1, T2] {
return Tuple[T1, T2]{V1: v1, V2: v2}
}
// ----------------------------------------------------------------------------
type Triple[T1 any, T2 any, T3 any] struct {
V1 T1
V2 T2
V3 T3
}
func (t Triple[T1, T2, T3]) TupleLength() int {
return 3
}
func (t Triple[T1, T2, T3]) TupleValues() []any {
return []any{t.V1, t.V2, t.V3}
}
func NewTriple[T1 any, T2 any, T3 any](v1 T1, v2 T2, v3 T3) Triple[T1, T2, T3] {
return Triple[T1, T2, T3]{V1: v1, V2: v2, V3: v3}
}
func NewTuple3[T1 any, T2 any, T3 any](v1 T1, v2 T2, v3 T3) Triple[T1, T2, T3] {
return Triple[T1, T2, T3]{V1: v1, V2: v2, V3: v3}
}
// ----------------------------------------------------------------------------
type Quadruple[T1 any, T2 any, T3 any, T4 any] struct {
V1 T1
V2 T2
V3 T3
V4 T4
}
func (t Quadruple[T1, T2, T3, T4]) TupleLength() int {
return 4
}
func (t Quadruple[T1, T2, T3, T4]) TupleValues() []any {
return []any{t.V1, t.V2, t.V3, t.V4}
}
func NewQuadruple[T1 any, T2 any, T3 any, T4 any](v1 T1, v2 T2, v3 T3, v4 T4) Quadruple[T1, T2, T3, T4] {
return Quadruple[T1, T2, T3, T4]{V1: v1, V2: v2, V3: v3, V4: v4}
}
func NewTuple4[T1 any, T2 any, T3 any, T4 any](v1 T1, v2 T2, v3 T3, v4 T4) Quadruple[T1, T2, T3, T4] {
return Quadruple[T1, T2, T3, T4]{V1: v1, V2: v2, V3: v3, V4: v4}
}
// ----------------------------------------------------------------------------
type Quintuple[T1 any, T2 any, T3 any, T4 any, T5 any] struct {
V1 T1
V2 T2
V3 T3
V4 T4
V5 T5
}
func (t Quintuple[T1, T2, T3, T4, T5]) TupleLength() int {
return 5
}
func (t Quintuple[T1, T2, T3, T4, T5]) TupleValues() []any {
return []any{t.V1, t.V2, t.V3, t.V4, t.V5}
}
func NewQuintuple[T1 any, T2 any, T3 any, T4 any, T5 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5) Quintuple[T1, T2, T3, T4, T5] {
return Quintuple[T1, T2, T3, T4, T5]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5}
}
func NewTuple5[T1 any, T2 any, T3 any, T4 any, T5 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5) Quintuple[T1, T2, T3, T4, T5] {
return Quintuple[T1, T2, T3, T4, T5]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5}
}
// ----------------------------------------------------------------------------
type Sextuple[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any] struct {
V1 T1
V2 T2
V3 T3
V4 T4
V5 T5
V6 T6
}
func (t Sextuple[T1, T2, T3, T4, T5, T6]) TupleLength() int {
return 6
}
func (t Sextuple[T1, T2, T3, T4, T5, T6]) TupleValues() []any {
return []any{t.V1, t.V2, t.V3, t.V4, t.V5, t.V6}
}
func NewSextuple[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5, v6 T6) Sextuple[T1, T2, T3, T4, T5, T6] {
return Sextuple[T1, T2, T3, T4, T5, T6]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5, V6: v6}
}
func NewTuple6[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5, v6 T6) Sextuple[T1, T2, T3, T4, T5, T6] {
return Sextuple[T1, T2, T3, T4, T5, T6]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5, V6: v6}
}
// ----------------------------------------------------------------------------
type Septuple[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any, T7 any] struct {
V1 T1
V2 T2
V3 T3
V4 T4
V5 T5
V6 T6
V7 T7
}
func (t Septuple[T1, T2, T3, T4, T5, T6, T7]) TupleLength() int {
return 7
}
func (t Septuple[T1, T2, T3, T4, T5, T6, T7]) TupleValues() []any {
return []any{t.V1, t.V2, t.V3, t.V4, t.V5, t.V6, t.V7}
}
func NewSeptuple[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any, T7 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5, v6 T6, v7 T7) Septuple[T1, T2, T3, T4, T5, T6, T7] {
return Septuple[T1, T2, T3, T4, T5, T6, T7]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5, V6: v6, V7: v7}
}
func NewTuple7[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any, T7 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5, v6 T6, v7 T7) Septuple[T1, T2, T3, T4, T5, T6, T7] {
return Septuple[T1, T2, T3, T4, T5, T6, T7]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5, V6: v6, V7: v7}
}
// ----------------------------------------------------------------------------
type Octuple[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any, T7 any, T8 any] struct {
V1 T1
V2 T2
V3 T3
V4 T4
V5 T5
V6 T6
V7 T7
V8 T8
}
func (t Octuple[T1, T2, T3, T4, T5, T6, T7, T8]) TupleLength() int {
return 8
}
func (t Octuple[T1, T2, T3, T4, T5, T6, T7, T8]) TupleValues() []any {
return []any{t.V1, t.V2, t.V3, t.V4, t.V5, t.V6, t.V7, t.V8}
}
func NewOctuple[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any, T7 any, T8 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5, v6 T6, v7 T7, v8 T8) Octuple[T1, T2, T3, T4, T5, T6, T7, T8] {
return Octuple[T1, T2, T3, T4, T5, T6, T7, T8]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5, V6: v6, V7: v7, V8: v8}
}
func NewTuple8[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any, T7 any, T8 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5, v6 T6, v7 T7, v8 T8) Octuple[T1, T2, T3, T4, T5, T6, T7, T8] {
return Octuple[T1, T2, T3, T4, T5, T6, T7, T8]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5, V6: v6, V7: v7, V8: v8}
}
// ----------------------------------------------------------------------------
type Nonuple[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any, T7 any, T8 any, T9 any] struct {
V1 T1
V2 T2
V3 T3
V4 T4
V5 T5
V6 T6
V7 T7
V8 T8
V9 T9
}
func (t Nonuple[T1, T2, T3, T4, T5, T6, T7, T8, T9]) TupleLength() int {
return 9
}
func (t Nonuple[T1, T2, T3, T4, T5, T6, T7, T8, T9]) TupleValues() []any {
return []any{t.V1, t.V2, t.V3, t.V4, t.V5, t.V6, t.V7, t.V8, t.V9}
}
func NewNonuple[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any, T7 any, T8 any, T9 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5, v6 T6, v7 T7, v8 T8, v9 T9) Nonuple[T1, T2, T3, T4, T5, T6, T7, T8, T9] {
return Nonuple[T1, T2, T3, T4, T5, T6, T7, T8, T9]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5, V6: v6, V7: v7, V8: v8, V9: v9}
}
func NewTuple9[T1 any, T2 any, T3 any, T4 any, T5 any, T6 any, T7 any, T8 any, T9 any](v1 T1, v2 T2, v3 T3, v4 T4, v5 T5, v6 T6, v7 T7, v8 T8, v9 T9) Nonuple[T1, T2, T3, T4, T5, T6, T7, T8, T9] {
return Nonuple[T1, T2, T3, T4, T5, T6, T7, T8, T9]{V1: v1, V2: v2, V3: v3, V4: v4, V5: v5, V6: v6, V7: v7, V8: v8, V9: v9}
}
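
A small sketch showing the NewTupleN constructors together with the ValueGroup interface (example not part of the diff):

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)

func main() {
    t := dataext.NewTuple("answer", 42) // Tuple[string, int]
    fmt.Println(t.V1, t.V2)             // answer 42

    // every tuple type also satisfies ValueGroup
    var vg dataext.ValueGroup = dataext.NewTriple(1, "two", 3.0)
    fmt.Println(vg.TupleLength()) // 3
    fmt.Println(vg.TupleValues()) // [1 two 3]
}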

enums/enum.go (new file, +33 lines)

@@ -0,0 +1,33 @@
package enums
type Enum interface {
Valid() bool
ValuesAny() []any
ValuesMeta() []EnumMetaValue
VarName() string
TypeName() string
PackageName() string
}
type StringEnum interface {
Enum
String() string
}
type DescriptionEnum interface {
Enum
Description() string
DescriptionMeta() EnumDescriptionMetaValue
}
type EnumMetaValue struct {
VarName string `json:"varName"`
Value Enum `json:"value"`
Description *string `json:"description"`
}
type EnumDescriptionMetaValue struct {
VarName string `json:"varName"`
Value Enum `json:"value"`
Description string `json:"description"`
}
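
Nothing in this hunk defines a concrete enum (implementations presumably live in downstream or generated code). Purely as an illustration, a hand-written, hypothetical type satisfying StringEnum could look like this:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/enums"
)

// Color is a hand-written stand-in for an enum implementation.
type Color string

const (
    ColorRed  Color = "RED"
    ColorBlue Color = "BLUE"
)

func (c Color) Valid() bool         { return c == ColorRed || c == ColorBlue }
func (c Color) String() string      { return string(c) }
func (c Color) TypeName() string    { return "Color" }
func (c Color) PackageName() string { return "main" }

func (c Color) VarName() string {
    switch c {
    case ColorRed:
        return "ColorRed"
    case ColorBlue:
        return "ColorBlue"
    default:
        return ""
    }
}

func (c Color) ValuesAny() []any {
    return []any{ColorRed, ColorBlue}
}

func (c Color) ValuesMeta() []enums.EnumMetaValue {
    return []enums.EnumMetaValue{
        {VarName: "ColorRed", Value: ColorRed, Description: nil},
        {VarName: "ColorBlue", Value: ColorBlue, Description: nil},
    }
}

func main() {
    var e enums.StringEnum = ColorRed
    fmt.Println(e.Valid(), e.String()) // true RED
}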

exerr/builder.go (new file, +536 lines)

@@ -0,0 +1,536 @@
package exerr
import (
"bytes"
"context"
"encoding/json"
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/dataext"
"git.blackforestbytes.com/BlackForestBytes/goext/enums"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"github.com/gin-gonic/gin"
"github.com/rs/zerolog"
"go.mongodb.org/mongo-driver/bson/primitive"
"net/http"
"os"
"runtime/debug"
"strings"
"time"
)
//
// ==== USAGE =====
//
// If some method returns an error, always wrap it into an exerror:
// value, err := do_something(..)
// if err != nil {
// return nil, exerror.Wrap(err, "do something failed").Build()
// }
//
// If possible, add metadata to the error (e.g. the id that was not found, ...); the methods are the same as in zerolog
// return nil, exerror.Wrap(err, "do something failed").Str("someid", id).Int("count", in.Count).Build()
//
// You can also add extra-data to an error with Extra(..)
// in contrast to metadata, extra-data is always printed in the resulting error and is intended for additional (programmatically readable) data on top of the errortype
// (metadata is more internal debug info/help)
//
// You can change the errortype with `.User()` and `.System()` (User-errors are 400 and System-errors 500)
// You can also manually set the statuscode with `.WithStatuscode(http.StatusNotFound)`
// You can set the type with `WithType(..)`
//
// New Errors (that don't wrap an existing err object) are created with New
// return nil, exerror.New(exerror.TypeInternal, "something went horribly wrong").Build()
// You can either use an existing ErrorType, the "catch-all" ErrInternal, or add your own ErrorType in consts.go
//
// All errors should be handled in one of the following four ways:
// - return the error to the caller and let him handle it:
// (also auto-prints the error to the log)
// => Wrap/New + Build
// - Print the error
// (also auto-sends it to the error-service)
// This is useful for errors that happen asynchronously or are non-fatal for the current request
// => Wrap/New + Print
// - Return the error to the Rest-API caller
// (also auto-prints the error to the log)
// (also auto-sends it to the error-service)
// => Wrap/New + Output
// - Print and stop the service
// (also auto-sends it to the error-service)
// => Wrap/New + Fatal
//
type Builder struct {
wrappedErr error
errorData *ExErr
containsGinData bool
containsContextData bool
noLog bool
}
func Get(err error) *Builder {
return &Builder{errorData: FromError(err)}
}
func New(t ErrorType, msg string) *Builder {
return &Builder{errorData: newExErr(CatSystem, t, msg)}
}
func Wrap(err error, msg string) *Builder {
if err == nil {
return &Builder{errorData: newExErr(CatSystem, TypeInternal, msg)} // prevent NPE if we call Wrap with err==nil
}
v := FromError(err)
if !pkgconfig.RecursiveErrors {
v.Message = msg
return &Builder{wrappedErr: err, errorData: v}
} else {
return &Builder{wrappedErr: err, errorData: wrapExErr(v, msg, CatWrap, 1)}
}
}
// ----------------------------------------------------------------------------
func (b *Builder) WithType(t ErrorType) *Builder {
b.errorData.Type = t
return b
}
func (b *Builder) WithStatuscode(status int) *Builder {
b.errorData.StatusCode = &status
return b
}
func (b *Builder) WithMessage(msg string) *Builder {
b.errorData.Message = msg
return b
}
func (b *Builder) WithSeverity(v ErrorSeverity) *Builder {
b.errorData.Severity = v
return b
}
func (b *Builder) WithCategory(v ErrorCategory) *Builder {
b.errorData.Category = v
return b
}
// ----------------------------------------------------------------------------
// Err changes the Severity to ERROR (default)
// The error will be:
//
// - On Build():
//
// - Short-Logged as Err
//
// - On Print():
//
// - Logged as Err
//
// - Sent to the error-service
//
// - On Output():
//
// - Logged as Err
//
// - Sent to the error-service
func (b *Builder) Err() *Builder {
b.errorData.Severity = SevErr
return b
}
// Warn changes the Severity to WARN
// The error will be:
//
// - On Build():
//
// - -(nothing)-
//
// - On Print():
//
// - Short-Logged as Warn
//
// - On Output():
//
// - Logged as Warn
func (b *Builder) Warn() *Builder {
b.errorData.Severity = SevWarn
return b
}
// Info changes the Severity to INFO
// The error will be:
//
// - On Build():
//
// - -(nothing)-
//
// - On Print():
//
// - -(nothing)-
//
// - On Output():
//
// - -(nothing)-
func (b *Builder) Info() *Builder {
b.errorData.Severity = SevInfo
return b
}
// ----------------------------------------------------------------------------
// User sets the Category to CatUser
//
// Errors with category CatUser are by default returned with statuscode 400 (errors with CatSystem with 500)
func (b *Builder) User() *Builder {
b.errorData.Category = CatUser
return b
}
func (b *Builder) System() *Builder {
b.errorData.Category = CatSystem
return b
}
// ----------------------------------------------------------------------------
func (b *Builder) NoLog() *Builder {
b.noLog = true
return b
}
// ----------------------------------------------------------------------------
func (b *Builder) Id(key string, val fmt.Stringer) *Builder {
return b.addMeta(key, MDTID, newIDWrap(val))
}
func (b *Builder) StrPtr(key string, val *string) *Builder {
return b.addMeta(key, MDTStringPtr, val)
}
func (b *Builder) Str(key string, val string) *Builder {
return b.addMeta(key, MDTString, val)
}
func (b *Builder) Int(key string, val int) *Builder {
return b.addMeta(key, MDTInt, val)
}
func (b *Builder) Int8(key string, val int8) *Builder {
return b.addMeta(key, MDTInt8, val)
}
func (b *Builder) Int16(key string, val int16) *Builder {
return b.addMeta(key, MDTInt16, val)
}
func (b *Builder) Int32(key string, val int32) *Builder {
return b.addMeta(key, MDTInt32, val)
}
func (b *Builder) Int64(key string, val int64) *Builder {
return b.addMeta(key, MDTInt64, val)
}
func (b *Builder) Float32(key string, val float32) *Builder {
return b.addMeta(key, MDTFloat32, val)
}
func (b *Builder) Float64(key string, val float64) *Builder {
return b.addMeta(key, MDTFloat64, val)
}
func (b *Builder) Bool(key string, val bool) *Builder {
return b.addMeta(key, MDTBool, val)
}
func (b *Builder) Bytes(key string, val []byte) *Builder {
return b.addMeta(key, MDTBytes, val)
}
func (b *Builder) ObjectID(key string, val primitive.ObjectID) *Builder {
return b.addMeta(key, MDTObjectID, val)
}
func (b *Builder) Time(key string, val time.Time) *Builder {
return b.addMeta(key, MDTTime, val)
}
func (b *Builder) Dur(key string, val time.Duration) *Builder {
return b.addMeta(key, MDTDuration, val)
}
func (b *Builder) Strs(key string, val []string) *Builder {
return b.addMeta(key, MDTStringArray, val)
}
func (b *Builder) Ints(key string, val []int) *Builder {
return b.addMeta(key, MDTIntArray, val)
}
func (b *Builder) Ints32(key string, val []int32) *Builder {
return b.addMeta(key, MDTInt32Array, val)
}
func (b *Builder) Type(key string, cls interface{}) *Builder {
return b.addMeta(key, MDTString, fmt.Sprintf("%T", cls))
}
func (b *Builder) Interface(key string, val interface{}) *Builder {
return b.addMeta(key, MDTAny, newAnyWrap(val))
}
func (b *Builder) Any(key string, val any) *Builder {
return b.addMeta(key, MDTAny, newAnyWrap(val))
}
func (b *Builder) Stringer(key string, val fmt.Stringer) *Builder {
if langext.IsNil(val) {
return b.addMeta(key, MDTString, "(!nil)")
} else {
return b.addMeta(key, MDTString, val.String())
}
}
func (b *Builder) Enum(key string, val enums.Enum) *Builder {
return b.addMeta(key, MDTEnum, newEnumWrap(val))
}
func (b *Builder) Stack() *Builder {
return b.addMeta("@Stack", MDTString, string(debug.Stack()))
}
func (b *Builder) Errs(key string, val []error) *Builder {
for i, valerr := range val {
b.addMeta(fmt.Sprintf("%v[%v]", key, i), MDTString, Get(valerr).errorData.FormatLog(LogPrintFull))
}
return b
}
func (b *Builder) GinReq(ctx context.Context, g *gin.Context, req *http.Request) *Builder {
if v := ctx.Value("start_timestamp"); v != nil {
if t, ok := v.(time.Time); ok {
b.Time("ctx_startTimestamp", t)
b.Time("ctx_endTimestamp", time.Now())
}
}
b.Str("gin_method", req.Method)
b.Str("gin_host", req.Host)
b.Str("gin_path", g.FullPath())
b.Strs("gin_header", extractHeader(g.Request.Header))
if req.URL != nil {
b.Str("gin_url", req.URL.String())
}
if ctxVal := g.GetString("apiversion"); ctxVal != "" {
b.Str("gin_context_apiversion", ctxVal)
}
if ctxVal := g.GetString("uid"); ctxVal != "" {
b.Str("gin_context_uid", ctxVal)
}
if ctxVal := g.GetString("fcmId"); ctxVal != "" {
b.Str("gin_context_fcmid", ctxVal)
}
if ctxVal := g.GetString("reqid"); ctxVal != "" {
b.Str("gin_context_reqid", ctxVal)
}
if req.Method != "GET" && req.Body != nil {
if req.Header.Get("Content-Type") == "application/json" {
if brc, ok := req.Body.(dataext.BufferedReadCloser); ok {
if bin, err := brc.BufferedAll(); err == nil {
if len(bin) < 16*1024 {
var prettyJSON bytes.Buffer
err = json.Indent(&prettyJSON, bin, "", " ")
if err == nil {
b.Str("gin_body", string(prettyJSON.Bytes()))
} else {
b.Bytes("gin_body", bin)
}
} else {
b.Str("gin_body", fmt.Sprintf("[[%v bytes | %s]]", len(bin), req.Header.Get("Content-Type")))
}
}
}
}
if req.Header.Get("Content-Type") == "multipart/form-data" || req.Header.Get("Content-Type") == "x-www-form-urlencoded" {
if brc, ok := req.Body.(dataext.BufferedReadCloser); ok {
if bin, err := brc.BufferedAll(); err == nil {
if len(bin) < 16*1024 {
b.Bytes("gin_body", bin)
} else {
b.Str("gin_body", fmt.Sprintf("[[%v bytes | %s]]", len(bin), req.Header.Get("Content-Type")))
}
}
}
}
}
pkgconfig.ExtendGinMeta(ctx, b, g, req)
b.containsGinData = true
return b
}
func (b *Builder) CtxData(method Method, ctx context.Context) *Builder {
pkgconfig.ExtendContextMeta(b, method, ctx)
b.containsContextData = true
return b
}
func extractHeader(header map[string][]string) []string {
r := make([]string, 0, len(header))
for k, v := range header {
for _, hval := range v {
value := hval
value = strings.ReplaceAll(value, "\n", "\\n")
value = strings.ReplaceAll(value, "\r", "\\r")
value = strings.ReplaceAll(value, "\t", "\\t")
r = append(r, k+": "+value)
}
}
return r
}
// ----------------------------------------------------------------------------
// Extra adds additional data to the error
// this is not like the other metadata (like Id(), Str(), etc)
// this data is public and will be printed/outputted
func (b *Builder) Extra(key string, val any) *Builder {
b.errorData.Extra[key] = val
return b
}
// ----------------------------------------------------------------------------
// Build creates a new error, ready to pass up the stack
// If the error is not SevWarn or SevInfo it also gets logged (in short form, without stacktrace) onto stdout
// Can be globally configured with ZeroLogErrTraces and ZeroLogAllTraces
// Can be locally suppressed with Builder.NoLog()
func (b *Builder) Build(ctxs ...context.Context) error {
return b.BuildAsExerr(ctxs...)
}
func (b *Builder) BuildAsExerr(ctxs ...context.Context) *ExErr {
warnOnPkgConfigNotInitialized()
for _, dctx := range ctxs {
b.CtxData(MethodBuild, dctx)
}
if pkgconfig.DisableErrorWrapping && b.wrappedErr != nil {
return FromError(b.wrappedErr)
}
if pkgconfig.ZeroLogErrTraces && !b.noLog && (b.errorData.Severity == SevErr || b.errorData.Severity == SevFatal) {
b.errorData.ShortLog(pkgconfig.ZeroLogger.Error())
} else if pkgconfig.ZeroLogAllTraces && !b.noLog {
b.errorData.ShortLog(pkgconfig.ZeroLogger.Error())
}
b.errorData.CallListener(MethodBuild, ListenerOpt{NoLog: b.noLog})
return b.errorData
}
// Output returns the error to the Rest-API caller (written to the gin response).
// The error also gets printed to stdout/stderr
// If the error is SevErr|SevFatal we also send it to the error-service
func (b *Builder) Output(ctx context.Context, g *gin.Context) {
warnOnPkgConfigNotInitialized()
if !b.containsGinData && g.Request != nil {
// Auto-Add gin metadata if the caller hasn't already done it
b.GinReq(ctx, g, g.Request)
}
b.CtxData(MethodOutput, ctx)
// this is only here to add one level to the trace
// so that .Build() and .Output() and .Print() have the same depth and our stack-skip logger can have the same skip-count
b.doOutput(ctx, g)
}
func (b *Builder) doOutput(ctx context.Context, g *gin.Context) {
b.errorData.Output(g)
if (b.errorData.Severity == SevErr || b.errorData.Severity == SevFatal) && (pkgconfig.ZeroLogErrGinOutput || pkgconfig.ZeroLogAllGinOutput) {
b.errorData.Log(pkgconfig.ZeroLogger.Error())
} else if (b.errorData.Severity == SevWarn) && (pkgconfig.ZeroLogAllGinOutput) {
b.errorData.Log(pkgconfig.ZeroLogger.Warn())
}
b.errorData.CallListener(MethodOutput, ListenerOpt{NoLog: b.noLog})
}
// Print prints the error
// If the error is SevErr we also send it to the error-service
func (b *Builder) Print(ctxs ...context.Context) Proxy {
warnOnPkgConfigNotInitialized()
for _, dctx := range ctxs {
b.CtxData(MethodPrint, dctx)
}
// this is only here to add one level to the trace
// so that .Build() and .Output() and .Print() have the same depth and our stack-skip logger can have the same skip-count
return b.doPrint()
}
func (b *Builder) doPrint() Proxy {
if b.errorData.Severity == SevErr || b.errorData.Severity == SevFatal {
b.errorData.Log(pkgconfig.ZeroLogger.Error())
} else if b.errorData.Severity == SevWarn {
b.errorData.ShortLog(pkgconfig.ZeroLogger.Warn())
} else if b.errorData.Severity == SevInfo {
b.errorData.ShortLog(pkgconfig.ZeroLogger.Info())
} else {
b.errorData.ShortLog(pkgconfig.ZeroLogger.Debug())
}
b.errorData.CallListener(MethodPrint, ListenerOpt{NoLog: b.noLog})
return Proxy{v: *b.errorData} // we return Proxy<Exerr> here instead of Exerr to prevent warnings on ignored err-returns
}
func (b *Builder) Format(level LogPrintLevel) string {
return b.errorData.FormatLog(level)
}
// Fatal prints the error and terminates the program
// If the error is SevErr we also send it to the error-service
func (b *Builder) Fatal(ctxs ...context.Context) {
b.errorData.Severity = SevFatal
for _, dctx := range ctxs {
b.CtxData(MethodFatal, dctx)
}
// this is only here to add one level to the trace
// so that .Build() and .Output() and .Print() have the same depth and our stack-skip logger can have the same skip-count
b.doLogFatal()
b.errorData.CallListener(MethodFatal, ListenerOpt{NoLog: b.noLog})
os.Exit(1)
}
func (b *Builder) doLogFatal() {
b.errorData.Log(pkgconfig.ZeroLogger.WithLevel(zerolog.FatalLevel))
}
// ----------------------------------------------------------------------------
func (b *Builder) addMeta(key string, mdtype metaDataType, val interface{}) *Builder {
b.errorData.Meta.add(key, mdtype, val)
return b
}
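
A sketch tying the builder together the way the usage comment at the top of the file describes: wrap the error, attach metadata, set a statuscode, then Build. The loadUser function is hypothetical, and exerr.Init (see errinit.go further down) is expected to be called once at program start:

package main

import (
    "errors"
    "fmt"
    "net/http"

    "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
)

// loadUser is a hypothetical repository call, only here to produce an error to wrap.
func loadUser(id string) (string, error) {
    return "", errors.New("user not found")
}

func main() {
    exerr.Init(exerr.ErrorPackageConfigInit{}) // all defaults; normally done once in main()

    name, err := loadUser("4711")
    if err != nil {
        // wrap the low-level error, attach metadata, set a statuscode and build it
        err = exerr.Wrap(err, "failed to load user").
            Str("userID", "4711").
            WithStatuscode(http.StatusNotFound).
            Build()
        fmt.Println(err)
        return
    }

    fmt.Println(name)
}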

exerr/constructor.go (new file, +274 lines)

@@ -0,0 +1,274 @@
package exerr
import (
"encoding/json"
"fmt"
"go.mongodb.org/mongo-driver/bson/primitive"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"reflect"
"time"
)
var reflectTypeStr = reflect.TypeOf("")
func FromError(err error) *ExErr {
if err == nil {
// prevent NPE if we call FromError with err==nil
return &ExErr{
UniqueID: newID(),
Category: CatForeign,
Type: TypeInternal,
Severity: SevErr,
Timestamp: time.Time{},
StatusCode: nil,
Message: "",
WrappedErrType: "nil",
WrappedErr: err,
Caller: "",
OriginalError: nil,
Meta: make(MetaMap),
Extra: make(map[string]any),
}
}
//goland:noinspection GoTypeAssertionOnErrors
if verr, ok := err.(*ExErr); ok {
// A simple ExErr
return verr
}
//goland:noinspection GoTypeAssertionOnErrors
if verr, ok := err.(langext.PanicWrappedErr); ok {
return &ExErr{
UniqueID: newID(),
Category: CatForeign,
Type: TypePanic,
Severity: SevErr,
Timestamp: time.Time{},
StatusCode: nil,
Message: "A panic occurred",
WrappedErrType: fmt.Sprintf("%T", verr),
WrappedErr: err,
Caller: "",
OriginalError: nil,
Meta: MetaMap{
"panic_object": {DataType: MDTString, Value: fmt.Sprintf("%+v", verr.RecoveredObj())},
"panic_type": {DataType: MDTString, Value: fmt.Sprintf("%T", verr.RecoveredObj())},
"stack": {DataType: MDTString, Value: verr.Stack},
},
Extra: make(map[string]any),
}
}
//goland:noinspection GoTypeAssertionOnErrors
if verr, ok := err.(*langext.PanicWrappedErr); ok && verr != nil {
return &ExErr{
UniqueID: newID(),
Category: CatForeign,
Type: TypePanic,
Severity: SevErr,
Timestamp: time.Time{},
StatusCode: nil,
Message: "A panic occurred",
WrappedErrType: fmt.Sprintf("%T", verr),
WrappedErr: err,
Caller: "",
OriginalError: nil,
Meta: MetaMap{
"panic_object": {DataType: MDTString, Value: fmt.Sprintf("%+v", verr.RecoveredObj())},
"panic_type": {DataType: MDTString, Value: fmt.Sprintf("%T", verr.RecoveredObj())},
"stack": {DataType: MDTString, Value: verr.Stack},
},
Extra: make(map[string]any),
}
}
// A foreign error (eg a MongoDB exception)
return &ExErr{
UniqueID: newID(),
Category: CatForeign,
Type: TypeInternal,
Severity: SevErr,
Timestamp: time.Time{},
StatusCode: nil,
Message: err.Error(),
WrappedErrType: fmt.Sprintf("%T", err),
WrappedErr: err,
Caller: "",
OriginalError: nil,
Meta: getForeignMeta(err),
Extra: make(map[string]any),
}
}
func newExErr(cat ErrorCategory, errtype ErrorType, msg string) *ExErr {
return &ExErr{
UniqueID: newID(),
Category: cat,
Type: errtype,
Severity: SevErr,
Timestamp: time.Now(),
StatusCode: nil,
Message: msg,
WrappedErrType: "",
WrappedErr: nil,
Caller: callername(2),
OriginalError: nil,
Meta: make(map[string]MetaValue),
Extra: make(map[string]any),
}
}
func wrapExErr(e *ExErr, msg string, cat ErrorCategory, stacktraceskip int) *ExErr {
return &ExErr{
UniqueID: newID(),
Category: cat,
Type: TypeWrap,
Severity: e.Severity,
Timestamp: time.Now(),
StatusCode: e.StatusCode,
Message: msg,
WrappedErrType: "",
WrappedErr: nil,
Caller: callername(1 + stacktraceskip),
OriginalError: e,
Meta: make(map[string]MetaValue),
Extra: langext.CopyMap(langext.ForceMap(e.Extra)),
}
}
func getForeignMeta(err error) (mm MetaMap) {
mm = make(map[string]MetaValue)
defer func() {
if panicerr := recover(); panicerr != nil {
New(TypePanic, "Panic while trying to get foreign meta").
Str("source", err.Error()).
Interface("panic-object", panicerr).
Stack().
Print()
}
}()
rval := reflect.ValueOf(err)
if rval.Kind() == reflect.Interface || rval.Kind() == reflect.Ptr {
rval = reflect.ValueOf(err).Elem()
}
mm.add("foreign.errortype", MDTString, rval.Type().String())
for k, v := range addMetaPrefix("foreign", getReflectedMetaValues(err, 8)) {
mm[k] = v
}
return mm
}
func getReflectedMetaValues(value interface{}, remainingDepth int) map[string]MetaValue {
if remainingDepth <= 0 {
return map[string]MetaValue{}
}
if langext.IsNil(value) {
return map[string]MetaValue{"": {DataType: MDTNil, Value: nil}}
}
rval := reflect.ValueOf(value)
if rval.Type().Kind() == reflect.Ptr {
if rval.IsNil() {
return map[string]MetaValue{"*": {DataType: MDTNil, Value: nil}}
}
elem := rval.Elem()
return addMetaPrefix("*", getReflectedMetaValues(elem.Interface(), remainingDepth-1))
}
if !rval.CanInterface() {
return map[string]MetaValue{"": {DataType: MDTString, Value: "<<no-interface>>"}}
}
raw := rval.Interface()
switch ifraw := raw.(type) {
case time.Time:
return map[string]MetaValue{"": {DataType: MDTTime, Value: ifraw}}
case time.Duration:
return map[string]MetaValue{"": {DataType: MDTDuration, Value: ifraw}}
case int:
return map[string]MetaValue{"": {DataType: MDTInt, Value: ifraw}}
case int8:
return map[string]MetaValue{"": {DataType: MDTInt8, Value: ifraw}}
case int16:
return map[string]MetaValue{"": {DataType: MDTInt16, Value: ifraw}}
case int32:
return map[string]MetaValue{"": {DataType: MDTInt32, Value: ifraw}}
case int64:
return map[string]MetaValue{"": {DataType: MDTInt64, Value: ifraw}}
case string:
return map[string]MetaValue{"": {DataType: MDTString, Value: ifraw}}
case bool:
return map[string]MetaValue{"": {DataType: MDTBool, Value: ifraw}}
case []byte:
return map[string]MetaValue{"": {DataType: MDTBytes, Value: ifraw}}
case float32:
return map[string]MetaValue{"": {DataType: MDTFloat32, Value: ifraw}}
case float64:
return map[string]MetaValue{"": {DataType: MDTFloat64, Value: ifraw}}
case []int:
return map[string]MetaValue{"": {DataType: MDTIntArray, Value: ifraw}}
case []int32:
return map[string]MetaValue{"": {DataType: MDTInt32Array, Value: ifraw}}
case primitive.ObjectID:
return map[string]MetaValue{"": {DataType: MDTObjectID, Value: ifraw}}
case []string:
return map[string]MetaValue{"": {DataType: MDTStringArray, Value: ifraw}}
}
if rval.Type().Kind() == reflect.Struct {
m := make(map[string]MetaValue)
for i := 0; i < rval.NumField(); i++ {
fieldtype := rval.Type().Field(i)
fieldname := fieldtype.Name
if fieldtype.IsExported() {
for k, v := range addMetaPrefix(fieldname, getReflectedMetaValues(rval.Field(i).Interface(), remainingDepth-1)) {
m[k] = v
}
}
}
return m
}
if rval.Type().ConvertibleTo(reflectTypeStr) {
return map[string]MetaValue{"": {DataType: MDTString, Value: rval.Convert(reflectTypeStr).String()}}
}
jsonval, err := json.Marshal(value)
if err != nil {
return map[string]MetaValue{"": {DataType: MDTString, Value: fmt.Sprintf("Failed to Marshal %T:\n%+v", value, value)}}
}
return map[string]MetaValue{"": {DataType: MDTString, Value: string(jsonval)}}
}
func addMetaPrefix(prefix string, m map[string]MetaValue) map[string]MetaValue {
if len(m) == 1 {
for k, v := range m {
if k == "" {
return map[string]MetaValue{prefix: v}
}
}
}
r := make(map[string]MetaValue, len(m))
for k, v := range m {
r[prefix+"."+k] = v
}
return r
}
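
FromError is the normalization point: a foreign error becomes CatForeign/TypeInternal with reflected metadata, a langext.PanicWrappedErr becomes TypePanic, and an error that already is an *ExErr is passed through unchanged. A tiny sketch of that behavior (not part of the diff):

package main

import (
    "errors"
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
)

func main() {
    // assumes exerr.Init() was called at startup (see errinit.go below)

    // a plain foreign error (e.g. from a db driver) becomes CatForeign / TypeInternal
    ee := exerr.FromError(errors.New("connection refused"))
    fmt.Println(ee.Category == exerr.CatForeign, ee.Type == exerr.TypeInternal) // true true

    // an error that already is an *ExErr is returned unchanged
    orig := exerr.New(exerr.TypeNotImplemented, "todo").NoLog().Build()
    fmt.Println(exerr.FromError(orig) == orig) // true
}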

exerr/data.go (new file, +18 lines)

@@ -0,0 +1,18 @@
package exerr
type Method string
const (
MethodOutput Method = "OUTPUT"
MethodPrint Method = "PRINT"
MethodBuild Method = "BUILD"
MethodFatal Method = "FATAL"
)
type LogPrintLevel string
const (
LogPrintFull LogPrintLevel = "Full"
LogPrintOverview LogPrintLevel = "Overview"
LogPrintShort LogPrintLevel = "Short"
)

exerr/dataCategory.go (new file, +89 lines)

@@ -0,0 +1,89 @@
package exerr
import (
"encoding/json"
"errors"
"fmt"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/bsoncodec"
"go.mongodb.org/mongo-driver/bson/bsonrw"
"go.mongodb.org/mongo-driver/bson/bsontype"
"reflect"
)
type ErrorCategory struct{ Category string }
var (
CatWrap = ErrorCategory{"Wrap"} // The error is simply wrapping another error (e.g. when a grpc call returns an error)
CatSystem = ErrorCategory{"System"} // An internal system error (e.g. connection to db failed)
CatUser = ErrorCategory{"User"} // The user (the API caller) did something wrong (e.g. they have no permission to do this)
CatForeign = ErrorCategory{"Foreign"} // A foreign error that some component threw (e.g. an unknown mongodb error); happens if we call Wrap(..) on a non-bmerror value
)
func (e *ErrorCategory) UnmarshalJSON(bytes []byte) error {
return json.Unmarshal(bytes, &e.Category)
}
func (e ErrorCategory) MarshalJSON() ([]byte, error) {
return json.Marshal(e.Category)
}
func (e *ErrorCategory) UnmarshalBSONValue(bt bsontype.Type, data []byte) error {
if bt == bson.TypeNull {
// we can't set nil in UnmarshalBSONValue (so we use default(struct))
// Use mongoext.CreateGoExtBsonRegistry if you need to unmarshal pointer values
// https://stackoverflow.com/questions/75167597
// https://jira.mongodb.org/browse/GODRIVER-2252
*e = ErrorCategory{}
return nil
}
if bt != bson.TypeString {
return errors.New(fmt.Sprintf("cannot unmarshal %v into String", bt))
}
var tt string
err := bson.RawValue{Type: bt, Value: data}.Unmarshal(&tt)
if err != nil {
return err
}
*e = ErrorCategory{tt}
return nil
}
func (e ErrorCategory) MarshalBSONValue() (bsontype.Type, []byte, error) {
return bson.MarshalValue(e.Category)
}
func (e ErrorCategory) DecodeValue(dc bsoncodec.DecodeContext, vr bsonrw.ValueReader, val reflect.Value) error {
if val.Kind() == reflect.Ptr && val.IsNil() {
if !val.CanSet() {
return errors.New("ValueUnmarshalerDecodeValue")
}
val.Set(reflect.New(val.Type().Elem()))
}
tp, src, err := bsonrw.Copier{}.CopyValueToBytes(vr)
if err != nil {
return err
}
if val.Kind() == reflect.Ptr && len(src) == 0 {
val.Set(reflect.Zero(val.Type()))
return nil
}
err = e.UnmarshalBSONValue(tp, src)
if err != nil {
return err
}
if val.Kind() == reflect.Ptr {
val.Set(reflect.ValueOf(&e))
} else {
val.Set(reflect.ValueOf(e))
}
return nil
}
//goland:noinspection GoUnusedGlobalVariable
var AllCategories = []ErrorCategory{CatWrap, CatSystem, CatUser, CatForeign}

exerr/dataSeverity.go (new file, +91 lines)

@@ -0,0 +1,91 @@
package exerr
import (
"encoding/json"
"errors"
"fmt"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/bsoncodec"
"go.mongodb.org/mongo-driver/bson/bsonrw"
"go.mongodb.org/mongo-driver/bson/bsontype"
"reflect"
)
type ErrorSeverity struct{ Severity string }
var (
SevTrace = ErrorSeverity{"Trace"}
SevDebug = ErrorSeverity{"Debug"}
SevInfo = ErrorSeverity{"Info"}
SevWarn = ErrorSeverity{"Warn"}
SevErr = ErrorSeverity{"Err"}
SevFatal = ErrorSeverity{"Fatal"}
)
func (e *ErrorSeverity) UnmarshalJSON(bytes []byte) error {
return json.Unmarshal(bytes, &e.Severity)
}
func (e ErrorSeverity) MarshalJSON() ([]byte, error) {
return json.Marshal(e.Severity)
}
func (e *ErrorSeverity) UnmarshalBSONValue(bt bsontype.Type, data []byte) error {
if bt == bson.TypeNull {
// we can't set nil in UnmarshalBSONValue (so we use default(struct))
// Use mongoext.CreateGoExtBsonRegistry if you need to unmarshal pointer values
// https://stackoverflow.com/questions/75167597
// https://jira.mongodb.org/browse/GODRIVER-2252
*e = ErrorSeverity{}
return nil
}
if bt != bson.TypeString {
return errors.New(fmt.Sprintf("cannot unmarshal %v into String", bt))
}
var tt string
err := bson.RawValue{Type: bt, Value: data}.Unmarshal(&tt)
if err != nil {
return err
}
*e = ErrorSeverity{tt}
return nil
}
func (e ErrorSeverity) MarshalBSONValue() (bsontype.Type, []byte, error) {
return bson.MarshalValue(e.Severity)
}
func (e ErrorSeverity) DecodeValue(dc bsoncodec.DecodeContext, vr bsonrw.ValueReader, val reflect.Value) error {
if val.Kind() == reflect.Ptr && val.IsNil() {
if !val.CanSet() {
return errors.New("ValueUnmarshalerDecodeValue")
}
val.Set(reflect.New(val.Type().Elem()))
}
tp, src, err := bsonrw.Copier{}.CopyValueToBytes(vr)
if err != nil {
return err
}
if val.Kind() == reflect.Ptr && len(src) == 0 {
val.Set(reflect.Zero(val.Type()))
return nil
}
err = e.UnmarshalBSONValue(tp, src)
if err != nil {
return err
}
if val.Kind() == reflect.Ptr {
val.Set(reflect.ValueOf(&e))
} else {
val.Set(reflect.ValueOf(e))
}
return nil
}
//goland:noinspection GoUnusedGlobalVariable
var AllSeverities = []ErrorSeverity{SevTrace, SevDebug, SevInfo, SevWarn, SevErr, SevFatal}

exerr/dataType.go (new file, +156 lines)

@@ -0,0 +1,156 @@
package exerr
import (
"encoding/json"
"errors"
"fmt"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/bsoncodec"
"go.mongodb.org/mongo-driver/bson/bsonrw"
"go.mongodb.org/mongo-driver/bson/bsontype"
"git.blackforestbytes.com/BlackForestBytes/goext/dataext"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"reflect"
)
type ErrorType struct {
Key string
DefaultStatusCode *int
}
//goland:noinspection GoUnusedGlobalVariable
var (
TypeInternal = NewType("INTERNAL_ERROR", langext.Ptr(500))
TypePanic = NewType("PANIC", langext.Ptr(500))
TypeNotImplemented = NewType("NOT_IMPLEMENTED", langext.Ptr(500))
TypeAssert = NewType("ASSERT", langext.Ptr(500))
TypeMongoQuery = NewType("MONGO_QUERY", langext.Ptr(500))
TypeCursorTokenDecode = NewType("CURSOR_TOKEN_DECODE", langext.Ptr(500))
TypeMongoFilter = NewType("MONGO_FILTER", langext.Ptr(500))
TypeMongoReflection = NewType("MONGO_REFLECTION", langext.Ptr(500))
TypeMongoInvalidOpt = NewType("MONGO_INVALIDOPT", langext.Ptr(500))
TypeSQLQuery = NewType("SQL_QUERY", langext.Ptr(500))
TypeSQLBuild = NewType("SQL_BUILD", langext.Ptr(500))
TypeSQLDecode = NewType("SQL_DECODE", langext.Ptr(500))
TypeWrap = NewType("Wrap", nil)
TypeBindFailURI = NewType("BINDFAIL_URI", langext.Ptr(400))
TypeBindFailQuery = NewType("BINDFAIL_QUERY", langext.Ptr(400))
TypeBindFailJSON = NewType("BINDFAIL_JSON", langext.Ptr(400))
TypeBindFailFormData = NewType("BINDFAIL_FORMDATA", langext.Ptr(400))
TypeBindFailHeader = NewType("BINDFAIL_HEADER", langext.Ptr(400))
TypeMarshalEntityID = NewType("MARSHAL_ENTITY_ID", langext.Ptr(400))
TypeInvalidCSID = NewType("INVALID_CSID", langext.Ptr(400))
TypeGoogleStatuscode = NewType("GOOGLE_STATUSCODE", langext.Ptr(400))
TypeGoogleResponse = NewType("GOOGLE_RESPONSE", langext.Ptr(400))
TypeUnauthorized = NewType("UNAUTHORIZED", langext.Ptr(401))
TypeAuthFailed = NewType("AUTH_FAILED", langext.Ptr(401))
TypeInvalidImage = NewType("IMAGEEXT_INVALID_IMAGE", langext.Ptr(400))
TypeInvalidMimeType = NewType("IMAGEEXT_INVALID_MIMETYPE", langext.Ptr(400))
// other values come from the downstream application that uses goext
)
func (e *ErrorType) UnmarshalJSON(bytes []byte) error {
var k string
err := json.Unmarshal(bytes, &k)
if err != nil {
return err
}
if d, ok := registeredTypes.Get(k); ok {
*e = d
return nil
} else {
*e = ErrorType{k, nil}
return nil
}
}
func (e ErrorType) MarshalJSON() ([]byte, error) {
return json.Marshal(e.Key)
}
func (e *ErrorType) UnmarshalBSONValue(bt bsontype.Type, data []byte) error {
if bt == bson.TypeNull {
// we can't set nil in UnmarshalBSONValue (so we use default(struct))
// Use mongoext.CreateGoExtBsonRegistry if you need to unmarshal pointer values
// https://stackoverflow.com/questions/75167597
// https://jira.mongodb.org/browse/GODRIVER-2252
*e = ErrorType{}
return nil
}
if bt != bson.TypeString {
return errors.New(fmt.Sprintf("cannot unmarshal %v into String", bt))
}
var tt string
err := bson.RawValue{Type: bt, Value: data}.Unmarshal(&tt)
if err != nil {
return err
}
if d, ok := registeredTypes.Get(tt); ok {
*e = d
return nil
} else {
*e = ErrorType{tt, nil}
return nil
}
}
func (e ErrorType) MarshalBSONValue() (bsontype.Type, []byte, error) {
return bson.MarshalValue(e.Key)
}
func (e ErrorType) DecodeValue(dc bsoncodec.DecodeContext, vr bsonrw.ValueReader, val reflect.Value) error {
if val.Kind() == reflect.Ptr && val.IsNil() {
if !val.CanSet() {
return errors.New("ValueUnmarshalerDecodeValue")
}
val.Set(reflect.New(val.Type().Elem()))
}
tp, src, err := bsonrw.Copier{}.CopyValueToBytes(vr)
if err != nil {
return err
}
if val.Kind() == reflect.Ptr && len(src) == 0 {
val.Set(reflect.Zero(val.Type()))
return nil
}
err = e.UnmarshalBSONValue(tp, src)
if err != nil {
return err
}
if val.Kind() == reflect.Ptr {
val.Set(reflect.ValueOf(&e))
} else {
val.Set(reflect.ValueOf(e))
}
return nil
}
var registeredTypes = dataext.SyncMap[string, ErrorType]{}
func NewType(key string, defStatusCode *int) ErrorType {
et := ErrorType{key, defStatusCode}
registeredTypes.Set(key, et)
return et
}
func ListRegisteredTypes() []ErrorType {
return registeredTypes.GetAllValues()
}
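
As the comment above notes, further values come from the downstream application; such applications register their own types via NewType, which also makes them resolvable again during JSON/BSON unmarshalling. A sketch with a hypothetical application-specific type (langext.Ptr and langext.Coalesce used as elsewhere in this changeset):

package main

import (
    "fmt"
    "net/http"

    "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
    "git.blackforestbytes.com/BlackForestBytes/goext/langext"
)

// TypeQuotaExceeded is a hypothetical application-specific error type; declaring it
// as a package-level var registers it (via registeredTypes) for later lookups.
var TypeQuotaExceeded = exerr.NewType("QUOTA_EXCEEDED", langext.Ptr(http.StatusTooManyRequests))

func main() {
    for _, et := range exerr.ListRegisteredTypes() {
        fmt.Println(et.Key, langext.Coalesce(et.DefaultStatusCode, 0))
    }
}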

exerr/data_test.go (new file, +153 lines)

@@ -0,0 +1,153 @@
package exerr
import (
"context"
"encoding/json"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"git.blackforestbytes.com/BlackForestBytes/goext/tst"
"testing"
"time"
)
func TestJSONMarshalErrorCategory(t *testing.T) {
c1 := CatSystem
jsonbin := tst.Must(json.Marshal(c1))(t)
var c2 ErrorCategory
tst.AssertNoErr(t, json.Unmarshal(jsonbin, &c2))
tst.AssertEqual(t, c1, c2)
tst.AssertEqual(t, string(jsonbin), "\"System\"")
}
func TestJSONMarshalErrorSeverity(t *testing.T) {
c1 := SevErr
jsonbin := tst.Must(json.Marshal(c1))(t)
var c2 ErrorSeverity
tst.AssertNoErr(t, json.Unmarshal(jsonbin, &c2))
tst.AssertEqual(t, c1, c2)
tst.AssertEqual(t, string(jsonbin), "\"Err\"")
}
func TestJSONMarshalErrorType(t *testing.T) {
c1 := TypeNotImplemented
jsonbin := tst.Must(json.Marshal(c1))(t)
var c2 ErrorType
tst.AssertNoErr(t, json.Unmarshal(jsonbin, &c2))
tst.AssertEqual(t, c1, c2)
tst.AssertEqual(t, string(jsonbin), "\"NOT_IMPLEMENTED\"")
}
func TestBSONMarshalErrorCategory(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
defer cancel()
client, err := mongo.Connect(ctx)
if err != nil {
t.Skip("Skip test - no local mongo found")
return
}
err = client.Ping(ctx, nil)
if err != nil {
t.Skip("Skip test - no local mongo found")
return
}
primimd := primitive.NewObjectID()
_, err = client.Database("_test").Collection("goext-cicd").InsertOne(ctx, bson.M{"_id": primimd, "val": CatSystem})
tst.AssertNoErr(t, err)
cursor := client.Database("_test").Collection("goext-cicd").FindOne(ctx, bson.M{"_id": primimd, "val": bson.M{"$type": "string"}})
var c1 struct {
ID primitive.ObjectID `bson:"_id"`
Val ErrorCategory `bson:"val"`
}
err = cursor.Decode(&c1)
tst.AssertNoErr(t, err)
tst.AssertEqual(t, c1.Val, CatSystem)
}
func TestBSONMarshalErrorSeverity(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
defer cancel()
client, err := mongo.Connect(ctx)
if err != nil {
t.Skip("Skip test - no local mongo found")
return
}
err = client.Ping(ctx, nil)
if err != nil {
t.Skip("Skip test - no local mongo found")
return
}
primimd := primitive.NewObjectID()
_, err = client.Database("_test").Collection("goext-cicd").InsertOne(ctx, bson.M{"_id": primimd, "val": SevErr})
tst.AssertNoErr(t, err)
cursor := client.Database("_test").Collection("goext-cicd").FindOne(ctx, bson.M{"_id": primimd, "val": bson.M{"$type": "string"}})
var c1 struct {
ID primitive.ObjectID `bson:"_id"`
Val ErrorSeverity `bson:"val"`
}
err = cursor.Decode(&c1)
tst.AssertNoErr(t, err)
tst.AssertEqual(t, c1.Val, SevErr)
}
func TestBSONMarshalErrorType(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
defer cancel()
client, err := mongo.Connect(ctx)
if err != nil {
t.Skip("Skip test - no local mongo found")
return
}
err = client.Ping(ctx, nil)
if err != nil {
t.Skip("Skip test - no local mongo found")
return
}
primimd := primitive.NewObjectID()
_, err = client.Database("_test").Collection("goext-cicd").InsertOne(ctx, bson.M{"_id": primimd, "val": TypeNotImplemented})
tst.AssertNoErr(t, err)
cursor := client.Database("_test").Collection("goext-cicd").FindOne(ctx, bson.M{"_id": primimd, "val": bson.M{"$type": "string"}})
var c1 struct {
ID primitive.ObjectID `bson:"_id"`
Val ErrorType `bson:"val"`
}
err = cursor.Decode(&c1)
tst.AssertNoErr(t, err)
tst.AssertEqual(t, c1.Val, TypeNotImplemented)
}

exerr/errinit.go (new file, +138 lines)

@@ -0,0 +1,138 @@
package exerr
import (
"context"
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"github.com/gin-gonic/gin"
"github.com/rs/zerolog"
"net/http"
"os"
)
type ErrorPackageConfig struct {
ZeroLogErrTraces bool // automatically print zerolog logs on .Build() (for SevErr and SevFatal)
ZeroLogAllTraces bool // automatically print zerolog logs on .Build() (for all Severities)
RecursiveErrors bool // errors contain their Origin-Error
ExtendedGinOutput bool // Log extended data (trace, meta, ...) to gin in err.Output()
IncludeMetaInGinOutput bool // Log meta fields ( from e.g. `.Str(key, val).Build()` ) to gin in err.Output()
ExtendGinOutput func(err *ExErr, json map[string]any) // (Optionally) extend the gin output with more fields
ExtendGinDataOutput func(err *ExErr, depth int, json map[string]any) // (Optionally) extend the gin `__data` output with more fields
DisableErrorWrapping bool // Disables the exerr.Wrap()...Build() function - will always return the original error
ZeroLogErrGinOutput bool // automatically print zerolog logs on ginext.Error() / .Output(gin) (for SevErr and SevFatal)
ZeroLogAllGinOutput bool // automatically print zerolog logs on ginext.Error() / .Output(gin) (for all Severities)
ExtendGinMeta func(ctx context.Context, b *Builder, g *gin.Context, req *http.Request) // (Optionally) extend the final error meta values with additional data from the gin context (a few are automatically added, here more can be included)
ExtendContextMeta func(b *Builder, method Method, dctx context.Context) // (Optionally) extend the final error meta values with additional data from the context (a few are automatically added, here more can be included)
ZeroLogger zerolog.Logger // The logger used to print exerr log messages
}
type ErrorPackageConfigInit struct {
ZeroLogErrTraces *bool
ZeroLogAllTraces *bool
RecursiveErrors *bool
ExtendedGinOutput *bool
IncludeMetaInGinOutput *bool
ExtendGinOutput func(err *ExErr, json map[string]any)
ExtendGinDataOutput func(err *ExErr, depth int, json map[string]any)
DisableErrorWrapping *bool
ZeroLogErrGinOutput *bool
ZeroLogAllGinOutput *bool
ExtendGinMeta func(ctx context.Context, b *Builder, g *gin.Context, req *http.Request)
ExtendContextMeta func(b *Builder, method Method, dctx context.Context)
ZeroLogger *zerolog.Logger
}
var initialized = false
var pkgconfig = ErrorPackageConfig{
ZeroLogErrTraces: true,
ZeroLogAllTraces: false,
RecursiveErrors: true,
ExtendedGinOutput: false,
IncludeMetaInGinOutput: true,
ExtendGinOutput: func(err *ExErr, json map[string]any) {},
ExtendGinDataOutput: func(err *ExErr, depth int, json map[string]any) {},
DisableErrorWrapping: false,
ZeroLogErrGinOutput: true,
ZeroLogAllGinOutput: false,
ExtendGinMeta: func(ctx context.Context, b *Builder, g *gin.Context, req *http.Request) {},
ExtendContextMeta: func(b *Builder, method Method, dctx context.Context) {},
}
// Init initializes the exerr package
// Must be called at program start, before (!) any errors are created
// Is not thread-safe
func Init(cfg ErrorPackageConfigInit) {
if initialized {
panic("Cannot re-init error package")
}
ego := func(err *ExErr, json map[string]any) {}
egdo := func(err *ExErr, depth int, json map[string]any) {}
egm := func(ctx context.Context, b *Builder, g *gin.Context, req *http.Request) {}
egcm := func(b *Builder, method Method, dctx context.Context) {}
if cfg.ExtendGinOutput != nil {
ego = cfg.ExtendGinOutput
}
if cfg.ExtendGinDataOutput != nil {
egdo = cfg.ExtendGinDataOutput
}
if cfg.ExtendGinMeta != nil {
egm = cfg.ExtendGinMeta
}
if cfg.ExtendContextMeta != nil {
egcm = cfg.ExtendContextMeta
}
var logger zerolog.Logger
if cfg.ZeroLogger != nil {
logger = *cfg.ZeroLogger
} else {
logger = newDefaultLogger()
}
pkgconfig = ErrorPackageConfig{
ZeroLogErrTraces: langext.Coalesce(cfg.ZeroLogErrTraces, pkgconfig.ZeroLogErrTraces),
ZeroLogAllTraces: langext.Coalesce(cfg.ZeroLogAllTraces, pkgconfig.ZeroLogAllTraces),
RecursiveErrors: langext.Coalesce(cfg.RecursiveErrors, pkgconfig.RecursiveErrors),
ExtendedGinOutput: langext.Coalesce(cfg.ExtendedGinOutput, pkgconfig.ExtendedGinOutput),
IncludeMetaInGinOutput: langext.Coalesce(cfg.IncludeMetaInGinOutput, pkgconfig.IncludeMetaInGinOutput),
ExtendGinOutput: ego,
ExtendGinDataOutput: egdo,
DisableErrorWrapping: langext.Coalesce(cfg.DisableErrorWrapping, pkgconfig.DisableErrorWrapping),
ZeroLogAllGinOutput: langext.Coalesce(cfg.ZeroLogAllGinOutput, pkgconfig.ZeroLogAllGinOutput),
ZeroLogErrGinOutput: langext.Coalesce(cfg.ZeroLogErrGinOutput, pkgconfig.ZeroLogErrGinOutput),
ExtendGinMeta: egm,
ExtendContextMeta: egcm,
ZeroLogger: logger,
}
initialized = true
}
func newDefaultLogger() zerolog.Logger {
cw := zerolog.ConsoleWriter{
Out: os.Stdout,
TimeFormat: "2006-01-02 15:04:05 Z07:00",
}
multi := zerolog.MultiLevelWriter(cw)
return zerolog.New(multi).With().Timestamp().CallerWithSkipFrameCount(5).Logger()
}
func Initialized() bool {
return initialized
}
func warnOnPkgConfigNotInitialized() {
if !initialized {
fmt.Printf("\n")
fmt.Printf("%s\n", langext.StrRepeat("=", 80))
fmt.Printf("%s\n", "[WARNING] exerr package used without initializiation")
fmt.Printf("%s\n", " call exerr.Init() in your main() function")
fmt.Printf("%s\n", langext.StrRepeat("=", 80))
fmt.Printf("\n")
}
}
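A minimal initialization sketch for the config above; the option values are illustrative, not package defaults (langext.Ptr is used to build the optional pointers):

package main

import (
    "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
    "git.blackforestbytes.com/BlackForestBytes/goext/langext"
)

func main() {
    // must run exactly once, before the first error is built
    if !exerr.Initialized() {
        exerr.Init(exerr.ErrorPackageConfigInit{
            ZeroLogErrTraces:       langext.Ptr(true),
            RecursiveErrors:        langext.Ptr(true),
            IncludeMetaInGinOutput: langext.Ptr(true),
        })
    }
}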

436
exerr/exerr.go Normal file
View File

@@ -0,0 +1,436 @@
package exerr
import (
"fmt"
"github.com/rs/xid"
"github.com/rs/zerolog"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"reflect"
"strings"
"time"
)
type ExErr struct {
UniqueID string `json:"uniqueID"`
Timestamp time.Time `json:"timestamp"`
Category ErrorCategory `json:"category"`
Severity ErrorSeverity `json:"severity"`
Type ErrorType `json:"type"`
StatusCode *int `json:"statusCode"`
Message string `json:"message"`
WrappedErrType string `json:"wrappedErrType"`
WrappedErr any `json:"-"`
Caller string `json:"caller"`
OriginalError *ExErr `json:"originalError"`
Extra map[string]any `json:"extra"`
Meta MetaMap `json:"meta"`
}
func (ee *ExErr) Error() string {
return ee.RecursiveMessage()
}
// Unwrap must be implemented so that some error.XXX methods work
func (ee *ExErr) Unwrap() error {
if ee.OriginalError == nil {
if ee.WrappedErr != nil {
if werr, ok := ee.WrappedErr.(error); ok {
return werr
}
}
return nil // this is necessary - otherwise we return a wrapped nil and the `x == nil` comparison fails (= panic in errors.Is and other failures)
}
return ee.OriginalError
}
// Is must be implemented so that error.Is(x) works
func (ee *ExErr) Is(e error) bool {
return IsFrom(ee, e)
}
// As must be implemented so that error.As(x) works
//
//goland:noinspection GoTypeAssertionOnErrors
func (ee *ExErr) As(target any) bool {
if dstErr, ok := target.(*ExErr); ok {
if dst0, ok := ee.contains(dstErr); ok {
*dstErr = *dst0 // copy the matched error into the supplied target
return true
} else {
return false
}
} else {
val := reflect.ValueOf(target)
typStr := val.Type().Elem().String()
for curr := ee; curr != nil; curr = curr.OriginalError {
if curr.Category == CatForeign && curr.WrappedErrType == typStr && curr.WrappedErr != nil {
val.Elem().Set(reflect.ValueOf(curr.WrappedErr))
return true
}
}
return false
}
}
func (ee *ExErr) Log(evt *zerolog.Event) {
evt.Msg(ee.FormatLog(LogPrintFull))
}
func (ee *ExErr) FormatLog(lvl LogPrintLevel) string {
// [LogPrintShort]
//
// - Only print message and type
// - Used e.g. for logging to the console when Build is called
// - also used in Print() if level == Warn/Info
//
// [LogPrintOverview]
//
// - print message, extra and errortrace
//
// [LogPrintFull]
//
// - print full error, with meta and extra, and trace, etc
// - Used in Output() and Print()
//
if lvl == LogPrintShort {
msg := ee.Message
if msg == "" {
msg = ee.RecursiveMessage()
}
if ee.OriginalError != nil && ee.OriginalError.Category == CatForeign {
msg = msg + " (" + strings.ReplaceAll(ee.OriginalError.Message, "\n", " ") + ")"
}
if ee.Type != TypeWrap {
return "[" + ee.Type.Key + "] " + msg
} else {
return msg
}
} else if lvl == LogPrintOverview {
str := "[" + ee.RecursiveType().Key + "] <" + ee.UniqueID + "> " + strings.ReplaceAll(ee.RecursiveMessage(), "\n", " ") + "\n"
for exk, exv := range ee.Extra {
str += fmt.Sprintf(" # [[[ %s ==> %v ]]]\n", exk, exv)
}
indent := ""
for curr := ee; curr != nil; curr = curr.OriginalError {
indent += " "
str += indent
str += "-> "
strmsg := strings.Trim(curr.Message, " \r\n\t")
if lbidx := strings.Index(strmsg, "\n"); lbidx >= 0 {
strmsg = strmsg[0:lbidx]
}
strmsg = langext.StrLimit(strmsg, 61, "...")
str += strmsg
str += "\n"
}
return str
} else if lvl == LogPrintFull {
str := "[" + ee.RecursiveType().Key + "] <" + ee.UniqueID + "> " + strings.ReplaceAll(ee.RecursiveMessage(), "\n", " ") + "\n"
for exk, exv := range ee.Extra {
str += fmt.Sprintf(" # [[[ %s ==> %v ]]]\n", exk, exv)
}
indent := ""
for curr := ee; curr != nil; curr = curr.OriginalError {
indent += " "
etype := curr.Type.Key
if curr.Type == TypeWrap {
etype = "~"
}
str += indent
str += "-> ["
str += etype
if curr.Category == CatForeign {
str += "|Foreign"
}
str += "] "
str += strings.ReplaceAll(curr.Message, "\n", " ")
if curr.Caller != "" {
str += " (@ "
str += curr.Caller
str += ")"
}
str += "\n"
if curr.Meta.Any() {
meta := indent + " {" + curr.Meta.FormatOneLine(240) + "}"
if len(meta) < 200 {
str += meta
str += "\n"
} else {
str += curr.Meta.FormatMultiLine(indent+" ", " ", 1024)
str += "\n"
}
}
}
return str
} else {
return "[?[" + ee.UniqueID + "]?]"
}
}
func (ee *ExErr) ShortLog(evt *zerolog.Event) {
ee.Meta.Apply(evt, langext.Ptr(240)).Msg(ee.FormatLog(LogPrintShort))
}
// RecursiveMessage returns the message to show
// = first error (top-down) that is not foreign/empty
// = lowest level error (that is not empty)
// = fallback to self.message
func (ee *ExErr) RecursiveMessage() string {
// ==== [1] ==== first error (top-down) that is not wrapping/foreign/empty
for curr := ee; curr != nil; curr = curr.OriginalError {
if curr.Message != "" && curr.Category != CatForeign {
return curr.Message
}
}
// ==== [2] ==== lowest level error (that is not empty)
deepestMsg := ""
for curr := ee; curr != nil; curr = curr.OriginalError {
if curr.Message != "" {
deepestMsg = curr.Message
}
}
if deepestMsg != "" {
return deepestMsg
}
// ==== [3] ==== fallback to self.message
return ee.Message
}
// RecursiveType returns the ErrorType to use
// = first error (top-down) that is not wrapping/empty
func (ee *ExErr) RecursiveType() ErrorType {
for curr := ee; curr != nil; curr = curr.OriginalError {
if curr.Type != TypeWrap {
return curr.Type
}
}
// fallback to self
return ee.Type
}
// RecursiveStatuscode returns the HTTP Statuscode to use
// = first error (top-down) that has a statuscode set
func (ee *ExErr) RecursiveStatuscode() *int {
for curr := ee; curr != nil; curr = curr.OriginalError {
if curr.StatusCode != nil {
return langext.Ptr(*curr.StatusCode)
}
}
return nil
}
// RecursiveCategory returns the ErrorCategory to use
// = first error (top-down) whose category is not CatWrap
func (ee *ExErr) RecursiveCategory() ErrorCategory {
for curr := ee; curr != nil; curr = curr.OriginalError {
if curr.Category != CatWrap {
return curr.Category
}
}
// fallback to <empty>
return ee.Category
}
// RecursiveMeta searches (top-down) for the first error that has a meta value with the specified key
// and returns its value (or nil)
func (ee *ExErr) RecursiveMeta(key string) *MetaValue {
for curr := ee; curr != nil; curr = curr.OriginalError {
if metaval, ok := curr.Meta[key]; ok {
return langext.Ptr(metaval)
}
}
return nil
}
// Depth returns the depth of recursively contained errors
func (ee *ExErr) Depth() int {
if ee.OriginalError == nil {
return 1
} else {
return ee.OriginalError.Depth() + 1
}
}
// GetMeta returns the meta value with the specified key
// this method recurses through all wrapped errors and returns the first matching meta value
func (ee *ExErr) GetMeta(key string) (any, bool) {
for curr := ee; curr != nil; curr = curr.OriginalError {
if v, ok := curr.Meta[key]; ok {
return v.Value, true
}
}
return nil, false
}
// GetMetaString functions the same as GetMeta, but returns false if the type does not match
func (ee *ExErr) GetMetaString(key string) (string, bool) {
if v1, ok := ee.GetMeta(key); ok {
if v2, ok := v1.(string); ok {
return v2, true
}
}
return "", false
}
func (ee *ExErr) GetMetaBool(key string) (bool, bool) {
if v1, ok := ee.GetMeta(key); ok {
if v2, ok := v1.(bool); ok {
return v2, true
}
}
return false, false
}
func (ee *ExErr) GetMetaInt(key string) (int, bool) {
if v1, ok := ee.GetMeta(key); ok {
if v2, ok := v1.(int); ok {
return v2, true
}
}
return 0, false
}
func (ee *ExErr) GetMetaFloat32(key string) (float32, bool) {
if v1, ok := ee.GetMeta(key); ok {
if v2, ok := v1.(float32); ok {
return v2, true
}
}
return 0, false
}
func (ee *ExErr) GetMetaFloat64(key string) (float64, bool) {
if v1, ok := ee.GetMeta(key); ok {
if v2, ok := v1.(float64); ok {
return v2, true
}
}
return 0, false
}
func (ee *ExErr) GetMetaTime(key string) (time.Time, bool) {
if v1, ok := ee.GetMeta(key); ok {
if v2, ok := v1.(time.Time); ok {
return v2, true
}
}
return time.Time{}, false
}
func (ee *ExErr) GetExtra(key string) (any, bool) {
if v, ok := ee.Extra[key]; ok {
return v, true
}
return nil, false
}
func (ee *ExErr) UniqueIDs() []string {
ids := make([]string, 0, 1)
for curr := ee; curr != nil; curr = curr.OriginalError {
ids = append(ids, curr.UniqueID)
}
return ids
}
// contains tests if the supplied error is contained in this error (anywhere in the chain)
func (ee *ExErr) contains(original *ExErr) (*ExErr, bool) {
if original == nil {
return nil, false
}
if ee == original {
return ee, true
}
for curr := ee; curr != nil; curr = curr.OriginalError {
if curr.equalsDirectProperties(original) {
return curr, true
}
}
return nil, false
}
// equalsDirectProperties tests if ee and other are equal, but only looks at primary properties (not `OriginalError` or `Meta`)
func (ee *ExErr) equalsDirectProperties(other *ExErr) bool {
if ee.UniqueID != other.UniqueID {
return false
}
if ee.Timestamp != other.Timestamp {
return false
}
if ee.Category != other.Category {
return false
}
if ee.Severity != other.Severity {
return false
}
if ee.Type != other.Type {
return false
}
if (ee.StatusCode == nil) != (other.StatusCode == nil) {
return false
}
if ee.StatusCode != nil && other.StatusCode != nil && *ee.StatusCode != *other.StatusCode {
return false
}
if ee.Message != other.Message {
return false
}
if ee.WrappedErrType != other.WrappedErrType {
return false
}
if ee.Caller != other.Caller {
return false
}
return true
}
func newID() string {
return xid.New().String()
}
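A short usage sketch for the accessors above; Wrap/Str/Build come from the builder elsewhere in this changeset and are used here exactly as in exerr_test.go:

package main

import (
    "errors"
    "fmt"
    "io"

    "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
)

func main() {
    // assumes exerr.Init(...) already ran at program start (see errinit.go)
    wrapped := exerr.Wrap(io.ErrUnexpectedEOF, "failed to parse upload").Str("filename", "data.bin").Build()

    var ee *exerr.ExErr
    if errors.As(wrapped, &ee) {
        fmt.Println(ee.Depth()) // number of chained errors
        if name, ok := ee.GetMetaString("filename"); ok {
            fmt.Println(name) // "data.bin"
        }
        fmt.Println(ee.FormatLog(exerr.LogPrintOverview))
    }
}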

102
exerr/exerr_test.go Normal file
View File

@@ -0,0 +1,102 @@
package exerr
import (
"errors"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/tst"
"os"
"testing"
)
func TestMain(m *testing.M) {
if !Initialized() {
Init(ErrorPackageConfigInit{ZeroLogErrTraces: langext.PFalse, ZeroLogAllTraces: langext.PFalse})
}
os.Exit(m.Run())
}
type golangErr struct {
Message string
}
func (g golangErr) Error() string {
return g.Message
}
type golangErr2 struct {
Message string
}
func (g golangErr2) Error() string {
return g.Message
}
type simpleError struct {
}
func (g simpleError) Error() string {
return "Something simple went wroong"
}
type simpleError2 struct {
}
func (g simpleError2) Error() string {
return "Something simple went wroong"
}
func TestExErrIs1(t *testing.T) {
e0 := simpleError{}
wrap := Wrap(e0, "something went wrong").Str("test", "123").Build()
tst.AssertTrue(t, errors.Is(wrap, simpleError{}))
tst.AssertFalse(t, errors.Is(wrap, golangErr{}))
tst.AssertFalse(t, errors.Is(wrap, golangErr{"error1"}))
}
func TestExErrIs2(t *testing.T) {
e0 := golangErr{"error1"}
wrap := Wrap(e0, "something went wrong").Str("test", "123").Build()
tst.AssertTrue(t, errors.Is(wrap, e0))
tst.AssertTrue(t, errors.Is(wrap, golangErr{"error1"}))
tst.AssertFalse(t, errors.Is(wrap, golangErr{"error2"}))
tst.AssertFalse(t, errors.Is(wrap, simpleError{}))
}
func TestExErrAs(t *testing.T) {
e0 := golangErr{"error1"}
w0 := Wrap(e0, "something went wrong").Str("test", "123").Build()
{
out := golangErr{}
ok := errors.As(w0, &out)
tst.AssertTrue(t, ok)
tst.AssertEqual(t, out.Message, "error1")
}
w1 := Wrap(w0, "outher error").Build()
{
out := golangErr{}
ok := errors.As(w1, &out)
tst.AssertTrue(t, ok)
tst.AssertEqual(t, out.Message, "error1")
}
{
out := golangErr2{}
ok := errors.As(w1, &out)
tst.AssertFalse(t, ok)
}
{
out := simpleError2{}
ok := errors.As(w1, &out)
tst.AssertFalse(t, ok)
}
}

145
exerr/gin.go Normal file
View File

@@ -0,0 +1,145 @@
package exerr
import (
"github.com/gin-gonic/gin"
json "git.blackforestbytes.com/BlackForestBytes/goext/gojson"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"net/http"
"time"
)
func (ee *ExErr) toJson(depth int, applyExtendListener bool, outputMeta bool) langext.H {
ginJson := langext.H{}
if ee.UniqueID != "" {
ginJson["id"] = ee.UniqueID
}
if ee.Category != CatWrap {
ginJson["category"] = ee.Category.Category
}
if ee.Type != TypeWrap {
ginJson["type"] = ee.Type.Key
}
if ee.StatusCode != nil {
ginJson["statuscode"] = ee.StatusCode
}
if ee.Message != "" {
ginJson["message"] = ee.Message
}
if ee.Caller != "" {
ginJson["caller"] = ee.Caller
}
if ee.Severity != SevErr {
ginJson["severity"] = ee.Severity.Severity
}
if ee.Timestamp != (time.Time{}) {
ginJson["time"] = ee.Timestamp.Format(time.RFC3339)
}
if ee.WrappedErrType != "" {
ginJson["wrappedErrType"] = ee.WrappedErrType
}
if ee.OriginalError != nil {
ginJson["original"] = ee.OriginalError.toJson(depth+1, applyExtendListener, outputMeta)
}
if outputMeta {
metaJson := langext.H{}
for metaKey, metaVal := range ee.Meta {
metaJson[metaKey] = metaVal.rawValueForJson()
}
ginJson["meta"] = metaJson
extraJson := langext.H{}
for extraKey, extraVal := range ee.Extra {
extraJson[extraKey] = extraVal
}
ginJson["extra"] = extraJson
}
if applyExtendListener {
pkgconfig.ExtendGinDataOutput(ee, depth, ginJson)
}
return ginJson
}
func (ee *ExErr) ToDefaultAPIJson() (string, error) {
gjr := json.GoJsonRender{Data: ee.ToAPIJson(true, pkgconfig.ExtendedGinOutput, pkgconfig.IncludeMetaInGinOutput), NilSafeSlices: true, NilSafeMaps: true}
r, err := gjr.RenderString()
if err != nil {
return "", err
}
return r, nil
}
// ToAPIJson converts the ExError to a json object
// (the same object as used in the Output(gin) method)
//
// Parameters:
// - [applyExtendListener]: if false the pkgconfig.ExtendGinOutput / pkgconfig.ExtendGinDataOutput will not be applied
// - [includeWrappedErrors]: if false we do not include the recursive/wrapped errors in `__data`
// - [includeMetaFields]: if true we also include meta-values (aka from `.Str(key, value).Build()`), needs includeWrappedErrors=true
func (ee *ExErr) ToAPIJson(applyExtendListener bool, includeWrappedErrors bool, includeMetaFields bool) langext.H {
apiOutput := langext.H{
"errorid": ee.UniqueID,
"message": ee.RecursiveMessage(),
"errorcode": ee.RecursiveType().Key,
"category": ee.RecursiveCategory().Category,
}
if includeWrappedErrors {
apiOutput["__data"] = ee.toJson(0, applyExtendListener, includeMetaFields)
}
for exkey, exval := range ee.Extra {
// ensure we do not override existing values
for {
if _, ok := apiOutput[exkey]; ok {
exkey = "_" + exkey
} else {
break
}
}
apiOutput[exkey] = exval
}
if applyExtendListener {
pkgconfig.ExtendGinOutput(ee, apiOutput)
}
return apiOutput
}
func (ee *ExErr) Output(g *gin.Context) {
warnOnPkgConfigNotInitialized()
var statuscode = http.StatusInternalServerError
var baseCat = ee.RecursiveCategory()
var baseType = ee.RecursiveType()
var baseStatuscode = ee.RecursiveStatuscode()
if baseCat == CatUser {
statuscode = http.StatusBadRequest
} else if baseCat == CatSystem {
statuscode = http.StatusInternalServerError
}
if baseStatuscode != nil {
statuscode = *baseStatuscode
} else if baseType.DefaultStatusCode != nil {
statuscode = *baseType.DefaultStatusCode
}
ginOutput := ee.ToAPIJson(true, pkgconfig.ExtendedGinOutput, pkgconfig.IncludeMetaInGinOutput)
g.Render(statuscode, json.GoJsonRender{Data: ginOutput, NilSafeSlices: true, NilSafeMaps: true})
}
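A sketch of wiring Output() into a plain gin handler; FromError is assumed (from its usage in helper.go) to return the *ExErr behind an error, or nil:

package api

import (
    "net/http"

    "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
    "github.com/gin-gonic/gin"
)

// writeError is a hypothetical helper, not part of goext
func writeError(g *gin.Context, err error) {
    if ee := exerr.FromError(err); ee != nil {
        ee.Output(g) // picks the statuscode from the error chain and renders the API json
        return
    }
    g.JSON(http.StatusInternalServerError, gin.H{"message": err.Error()})
}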

126
exerr/helper.go Normal file
View File

@@ -0,0 +1,126 @@
package exerr
import "fmt"
// IsType tests if the supplied error is of the specified ErrorType.
func IsType(err error, errType ErrorType) bool {
if err == nil {
return false
}
bmerr := FromError(err)
for bmerr != nil {
if bmerr.Type == errType {
return true
}
bmerr = bmerr.OriginalError
}
return false
}
// IsFrom tests if the supplied error originally stems from `original`
func IsFrom(e error, original error) bool {
if e == nil {
return false
}
//goland:noinspection GoDirectComparisonOfErrors
if e == original {
return true
}
bmerr := FromError(e)
if bmerr == nil {
return false
}
for curr := bmerr; curr != nil; curr = curr.OriginalError {
if curr.Category == CatForeign && curr.Message == original.Error() && curr.WrappedErrType == fmt.Sprintf("%T", original) {
return true
}
}
return false
}
// HasSourceMessage tests if the supplied error stems originally from an error with the message msg
func HasSourceMessage(e error, msg string) bool {
if e == nil {
return false
}
bmerr := FromError(e)
if bmerr == nil {
return false
}
for curr := bmerr; curr != nil; curr = curr.OriginalError {
if curr.OriginalError == nil && curr.Message == msg {
return true
}
}
return false
}
func MessageMatch(e error, matcher func(string) bool) bool {
if e == nil {
return false
}
if matcher(e.Error()) {
return true
}
bmerr := FromError(e)
if bmerr == nil {
return false
}
for curr := bmerr; curr != nil; curr = curr.OriginalError {
if matcher(curr.Message) {
return true
}
}
return false
}
// OriginalError returns the lowest level error, probably the original/external error that was originally wrapped
func OriginalError(e error) error {
if e == nil {
return nil
}
//goland:noinspection GoTypeAssertionOnErrors
bmerr, ok := e.(*ExErr)
if !ok {
return e
}
for bmerr.OriginalError != nil {
bmerr = bmerr.OriginalError
}
if bmerr.WrappedErr != nil {
if werr, ok := bmerr.WrappedErr.(error); ok {
return werr
}
}
return bmerr
}
func UniqueID(v error) *string {
if v == nil {
return nil
}
//goland:noinspection GoTypeAssertionOnErrors
if verr, ok := v.(*ExErr); ok {
return &verr.UniqueID
}
return nil
}
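A usage sketch for the helpers above; exerr.New(...).Build() mirrors the builder calls in ginext/funcWrapper.go and TypeNotImplemented is taken from the tests:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
)

func main() {
    err := exerr.New(exerr.TypeNotImplemented, "XLSX export is not implemented yet").Build()

    if exerr.IsType(err, exerr.TypeNotImplemented) {
        fmt.Println("caller should get a 501")
    }
    if exerr.HasSourceMessage(err, "XLSX export is not implemented yet") {
        fmt.Println("matched the innermost message")
    }
    if id := exerr.UniqueID(err); id != nil {
        fmt.Println("error id:", *id)
    }
}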

30
exerr/listener.go Normal file
View File

@@ -0,0 +1,30 @@
package exerr
import (
"sync"
)
type ListenerOpt struct {
NoLog bool
}
type Listener = func(method Method, v *ExErr, opt ListenerOpt)
var listenerLock = sync.Mutex{}
var listener = make([]Listener, 0)
func RegisterListener(l Listener) {
listenerLock.Lock()
defer listenerLock.Unlock()
listener = append(listener, l)
}
func (ee *ExErr) CallListener(m Method, opt ListenerOpt) {
listenerLock.Lock()
defer listenerLock.Unlock()
for _, v := range listener {
v(m, ee, opt)
}
}
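A sketch of registering a listener; the forwarding target is a placeholder:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
)

func main() {
    exerr.RegisterListener(func(method exerr.Method, v *exerr.ExErr, opt exerr.ListenerOpt) {
        if opt.NoLog {
            return
        }
        // placeholder: forward every reported error to an external tracker
        fmt.Printf("exerr [%s] via %v: %s\n", v.UniqueID, method, v.Message)
    })
}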

762
exerr/meta.go Normal file
View File

@@ -0,0 +1,762 @@
package exerr
import (
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"github.com/rs/zerolog"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"math"
"strconv"
"strings"
"time"
)
// MetaMap is a buffed-up map[string]any
// we also save type information of the map-values
// which allows us to deserialize them back into the correct types later
type MetaMap map[string]MetaValue
type metaDataType string
const (
MDTString metaDataType = "String"
MDTStringPtr metaDataType = "StringPtr"
MDTInt metaDataType = "Int"
MDTInt8 metaDataType = "Int8"
MDTInt16 metaDataType = "Int16"
MDTInt32 metaDataType = "Int32"
MDTInt64 metaDataType = "Int64"
MDTFloat32 metaDataType = "Float32"
MDTFloat64 metaDataType = "Float64"
MDTBool metaDataType = "Bool"
MDTBytes metaDataType = "Bytes"
MDTObjectID metaDataType = "ObjectID"
MDTTime metaDataType = "Time"
MDTDuration metaDataType = "Duration"
MDTStringArray metaDataType = "StringArr"
MDTIntArray metaDataType = "IntArr"
MDTInt32Array metaDataType = "Int32Arr"
MDTID metaDataType = "ID"
MDTAny metaDataType = "Interface"
MDTNil metaDataType = "Nil"
MDTEnum metaDataType = "Enum"
)
type MetaValue struct {
DataType metaDataType `json:"dataType"`
Value interface{} `json:"value"`
}
type metaValueSerialization struct {
DataType metaDataType `bson:"dataType"`
Value string `bson:"value"`
Raw interface{} `bson:"raw"`
}
func (v MetaValue) SerializeValue() (string, error) {
switch v.DataType {
case MDTString:
return v.Value.(string), nil
case MDTID:
return v.Value.(IDWrap).Serialize(), nil
case MDTAny:
return v.Value.(AnyWrap).Serialize(), nil
case MDTStringPtr:
if langext.IsNil(v.Value) {
return "#", nil
}
r := v.Value.(*string)
if r != nil {
return "*" + *r, nil
} else {
return "#", nil
}
case MDTInt:
return strconv.Itoa(v.Value.(int)), nil
case MDTInt8:
return strconv.FormatInt(int64(v.Value.(int8)), 10), nil
case MDTInt16:
return strconv.FormatInt(int64(v.Value.(int16)), 10), nil
case MDTInt32:
return strconv.FormatInt(int64(v.Value.(int32)), 10), nil
case MDTInt64:
return strconv.FormatInt(v.Value.(int64), 10), nil
case MDTFloat32:
return strconv.FormatFloat(float64(v.Value.(float32)), 'X', -1, 32), nil
case MDTFloat64:
return strconv.FormatFloat(v.Value.(float64), 'X', -1, 64), nil
case MDTBool:
if v.Value.(bool) {
return "true", nil
} else {
return "false", nil
}
case MDTBytes:
return hex.EncodeToString(v.Value.([]byte)), nil
case MDTObjectID:
return v.Value.(primitive.ObjectID).Hex(), nil
case MDTTime:
return strconv.FormatInt(v.Value.(time.Time).Unix(), 10) + "|" + strconv.FormatInt(int64(v.Value.(time.Time).Nanosecond()), 10), nil
case MDTDuration:
return v.Value.(time.Duration).String(), nil
case MDTStringArray:
if langext.IsNil(v.Value) {
return "#", nil
}
r, err := json.Marshal(v.Value.([]string))
if err != nil {
return "", err
}
return string(r), nil
case MDTIntArray:
if langext.IsNil(v.Value) {
return "#", nil
}
r, err := json.Marshal(v.Value.([]int))
if err != nil {
return "", err
}
return string(r), nil
case MDTInt32Array:
if langext.IsNil(v.Value) {
return "#", nil
}
r, err := json.Marshal(v.Value.([]int32))
if err != nil {
return "", err
}
return string(r), nil
case MDTNil:
return "", nil
case MDTEnum:
return v.Value.(EnumWrap).Serialize(), nil
}
return "", errors.New("Unknown type: " + string(v.DataType))
}
func (v MetaValue) ShortString(lim int) string {
switch v.DataType {
case MDTString:
r := strings.ReplaceAll(v.Value.(string), "\r", "")
r = strings.ReplaceAll(r, "\n", "\\n")
r = strings.ReplaceAll(r, "\t", "\\t")
return langext.StrLimit(r, lim, "...")
case MDTID:
return v.Value.(IDWrap).String()
case MDTAny:
return v.Value.(AnyWrap).String()
case MDTStringPtr:
if langext.IsNil(v.Value) {
return "<<null>>"
}
r := langext.CoalesceString(v.Value.(*string), "<<null>>")
r = strings.ReplaceAll(r, "\r", "")
r = strings.ReplaceAll(r, "\n", "\\n")
r = strings.ReplaceAll(r, "\t", "\\t")
return langext.StrLimit(r, lim, "...")
case MDTInt:
return strconv.Itoa(v.Value.(int))
case MDTInt8:
return strconv.FormatInt(int64(v.Value.(int8)), 10)
case MDTInt16:
return strconv.FormatInt(int64(v.Value.(int16)), 10)
case MDTInt32:
return strconv.FormatInt(int64(v.Value.(int32)), 10)
case MDTInt64:
return strconv.FormatInt(v.Value.(int64), 10)
case MDTFloat32:
return strconv.FormatFloat(float64(v.Value.(float32)), 'g', 4, 32)
case MDTFloat64:
return strconv.FormatFloat(v.Value.(float64), 'g', 4, 64)
case MDTBool:
return fmt.Sprintf("%v", v.Value.(bool))
case MDTBytes:
return langext.StrLimit(hex.EncodeToString(v.Value.([]byte)), lim, "...")
case MDTObjectID:
return v.Value.(primitive.ObjectID).Hex()
case MDTTime:
return v.Value.(time.Time).Format(time.RFC3339)
case MDTDuration:
return v.Value.(time.Duration).String()
case MDTStringArray:
if langext.IsNil(v.Value) {
return "<<null>>"
}
r, err := json.Marshal(v.Value.([]string))
if err != nil {
return "(err)"
}
return langext.StrLimit(string(r), lim, "...")
case MDTIntArray:
if langext.IsNil(v.Value) {
return "<<null>>"
}
r, err := json.Marshal(v.Value.([]int))
if err != nil {
return "(err)"
}
return langext.StrLimit(string(r), lim, "...")
case MDTInt32Array:
if langext.IsNil(v.Value) {
return "<<null>>"
}
r, err := json.Marshal(v.Value.([]int32))
if err != nil {
return "(err)"
}
return langext.StrLimit(string(r), lim, "...")
case MDTNil:
return "<<null>>"
case MDTEnum:
return v.Value.(EnumWrap).String()
}
return "(err)"
}
func (v MetaValue) Apply(key string, evt *zerolog.Event, limitLen *int) *zerolog.Event {
switch v.DataType {
case MDTString:
if limitLen == nil {
return evt.Str(key, v.Value.(string))
} else {
return evt.Str(key, langext.StrLimit(v.Value.(string), *limitLen, "..."))
}
case MDTID:
return evt.Str(key, v.Value.(IDWrap).Value)
case MDTAny:
if v.Value.(AnyWrap).IsError {
return evt.Str(key, "(err)")
} else {
if limitLen == nil {
return evt.Str(key, v.Value.(AnyWrap).Json)
} else {
return evt.Str(key, langext.StrLimit(v.Value.(AnyWrap).Json, *limitLen, "..."))
}
}
case MDTStringPtr:
if langext.IsNil(v.Value) {
return evt.Str(key, "<<null>>")
}
if limitLen == nil {
return evt.Str(key, langext.CoalesceString(v.Value.(*string), "<<null>>"))
} else {
return evt.Str(key, langext.StrLimit(langext.CoalesceString(v.Value.(*string), "<<null>>"), *limitLen, "..."))
}
case MDTInt:
return evt.Int(key, v.Value.(int))
case MDTInt8:
return evt.Int8(key, v.Value.(int8))
case MDTInt16:
return evt.Int16(key, v.Value.(int16))
case MDTInt32:
return evt.Int32(key, v.Value.(int32))
case MDTInt64:
return evt.Int64(key, v.Value.(int64))
case MDTFloat32:
return evt.Float32(key, v.Value.(float32))
case MDTFloat64:
return evt.Float64(key, v.Value.(float64))
case MDTBool:
return evt.Bool(key, v.Value.(bool))
case MDTBytes:
return evt.Bytes(key, v.Value.([]byte))
case MDTObjectID:
return evt.Str(key, v.Value.(primitive.ObjectID).Hex())
case MDTTime:
return evt.Time(key, v.Value.(time.Time))
case MDTDuration:
return evt.Dur(key, v.Value.(time.Duration))
case MDTStringArray:
if langext.IsNil(v.Value) {
return evt.Strs(key, nil)
}
return evt.Strs(key, v.Value.([]string))
case MDTIntArray:
if langext.IsNil(v.Value) {
return evt.Ints(key, nil)
}
return evt.Ints(key, v.Value.([]int))
case MDTInt32Array:
if langext.IsNil(v.Value) {
return evt.Ints32(key, nil)
}
return evt.Ints32(key, v.Value.([]int32))
case MDTNil:
return evt.Str(key, "<<null>>")
case MDTEnum:
if v.Value.(EnumWrap).IsNil {
return evt.Any(key, nil)
} else if v.Value.(EnumWrap).ValueRaw != nil {
return evt.Any(key, v.Value.(EnumWrap).ValueRaw)
} else {
return evt.Str(key, v.Value.(EnumWrap).ValueString)
}
}
return evt.Str(key, "(err)")
}
func (v MetaValue) MarshalJSON() ([]byte, error) {
str, err := v.SerializeValue()
if err != nil {
return nil, err
}
return json.Marshal(string(v.DataType) + ":" + str)
}
func (v *MetaValue) UnmarshalJSON(data []byte) error {
var str = ""
err := json.Unmarshal(data, &str)
if err != nil {
return err
}
split := strings.SplitN(str, ":", 2)
if len(split) != 2 {
return errors.New("failed to decode MetaValue: '" + str + "'")
}
return v.Deserialize(split[1], metaDataType(split[0]))
}
func (v MetaValue) MarshalBSON() ([]byte, error) {
serval, err := v.SerializeValue()
if err != nil {
return nil, Wrap(err, "failed to bson-marshal MetaValue (serialize)").Build()
}
// this is a kinda ugly hack - but serialization to mongodb and back can lose the correct type information...
bin, err := bson.Marshal(metaValueSerialization{
DataType: v.DataType,
Value: serval,
Raw: v.Value,
})
if err != nil {
return nil, Wrap(err, "failed to bson-marshal MetaValue (marshal)").Build()
}
return bin, nil
}
func (v *MetaValue) UnmarshalBSON(bytes []byte) error {
var serval metaValueSerialization
err := bson.Unmarshal(bytes, &serval)
if err != nil {
return Wrap(err, "failed to bson-unmarshal MetaValue (unmarshal)").Build()
}
err = v.Deserialize(serval.Value, serval.DataType)
if err != nil {
return Wrap(err, "failed to deserialize MetaValue from bson").Str("raw", serval.Value).Build()
}
return nil
}
func (v *MetaValue) Deserialize(value string, datatype metaDataType) error {
switch datatype {
case MDTString:
v.Value = value
v.DataType = datatype
return nil
case MDTID:
v.Value = deserializeIDWrap(value)
v.DataType = datatype
return nil
case MDTAny:
v.Value = deserializeAnyWrap(value)
v.DataType = datatype
return nil
case MDTStringPtr:
if len(value) <= 0 || (value[0] != '*' && value[0] != '#') {
return errors.New("Invalid StringPtr: " + value)
} else if value == "#" {
v.Value = nil
v.DataType = datatype
return nil
} else {
v.Value = langext.Ptr(value[1:])
v.DataType = datatype
return nil
}
case MDTInt:
pv, err := strconv.ParseInt(value, 10, 0)
if err != nil {
return err
}
v.Value = int(pv)
v.DataType = datatype
return nil
case MDTInt8:
pv, err := strconv.ParseInt(value, 10, 8)
if err != nil {
return err
}
v.Value = int8(pv)
v.DataType = datatype
return nil
case MDTInt16:
pv, err := strconv.ParseInt(value, 10, 16)
if err != nil {
return err
}
v.Value = int16(pv)
v.DataType = datatype
return nil
case MDTInt32:
pv, err := strconv.ParseInt(value, 10, 32)
if err != nil {
return err
}
v.Value = int32(pv)
v.DataType = datatype
return nil
case MDTInt64:
pv, err := strconv.ParseInt(value, 10, 64)
if err != nil {
return err
}
v.Value = pv
v.DataType = datatype
return nil
case MDTFloat32:
pv, err := strconv.ParseFloat(value, 64)
if err != nil {
return err
}
v.Value = float32(pv)
v.DataType = datatype
return nil
case MDTFloat64:
pv, err := strconv.ParseFloat(value, 64)
if err != nil {
return err
}
v.Value = pv
v.DataType = datatype
return nil
case MDTBool:
if value == "true" {
v.Value = true
v.DataType = datatype
return nil
}
if value == "false" {
v.Value = false
v.DataType = datatype
return nil
}
return errors.New("invalid bool value: " + value)
case MDTBytes:
r, err := hex.DecodeString(value)
if err != nil {
return err
}
v.Value = r
v.DataType = datatype
return nil
case MDTObjectID:
r, err := primitive.ObjectIDFromHex(value)
if err != nil {
return err
}
v.Value = r
v.DataType = datatype
return nil
case MDTTime:
ps := strings.Split(value, "|")
if len(ps) != 2 {
return errors.New("invalid time.time: " + value)
}
p1, err := strconv.ParseInt(ps[0], 10, 64)
if err != nil {
return err
}
p2, err := strconv.ParseInt(ps[1], 10, 32)
if err != nil {
return err
}
v.Value = time.Unix(p1, p2)
v.DataType = datatype
return nil
case MDTDuration:
r, err := time.ParseDuration(value)
if err != nil {
return err
}
v.Value = r
v.DataType = datatype
return nil
case MDTStringArray:
if value == "#" {
v.Value = nil
v.DataType = datatype
return nil
}
pj := make([]string, 0)
err := json.Unmarshal([]byte(value), &pj)
if err != nil {
return err
}
v.Value = pj
v.DataType = datatype
return nil
case MDTIntArray:
if value == "#" {
v.Value = nil
v.DataType = datatype
return nil
}
pj := make([]int, 0)
err := json.Unmarshal([]byte(value), &pj)
if err != nil {
return err
}
v.Value = pj
v.DataType = datatype
return nil
case MDTInt32Array:
if value == "#" {
v.Value = nil
v.DataType = datatype
return nil
}
pj := make([]int32, 0)
err := json.Unmarshal([]byte(value), &pj)
if err != nil {
return err
}
v.Value = pj
v.DataType = datatype
return nil
case MDTNil:
v.Value = nil
v.DataType = datatype
return nil
case MDTEnum:
v.Value = deserializeEnumWrap(value)
v.DataType = datatype
return nil
}
return errors.New("Unknown type: " + string(datatype))
}
func (v MetaValue) ValueString() string {
switch v.DataType {
case MDTString:
return v.Value.(string)
case MDTID:
return v.Value.(IDWrap).String()
case MDTAny:
return v.Value.(AnyWrap).String()
case MDTStringPtr:
if langext.IsNil(v.Value) {
return "<<null>>"
}
return langext.CoalesceString(v.Value.(*string), "<<null>>")
case MDTInt:
return strconv.Itoa(v.Value.(int))
case MDTInt8:
return strconv.FormatInt(int64(v.Value.(int8)), 10)
case MDTInt16:
return strconv.FormatInt(int64(v.Value.(int16)), 10)
case MDTInt32:
return strconv.FormatInt(int64(v.Value.(int32)), 10)
case MDTInt64:
return strconv.FormatInt(v.Value.(int64), 10)
case MDTFloat32:
return strconv.FormatFloat(float64(v.Value.(float32)), 'g', 4, 32)
case MDTFloat64:
return strconv.FormatFloat(v.Value.(float64), 'g', 4, 64)
case MDTBool:
return fmt.Sprintf("%v", v.Value.(bool))
case MDTBytes:
return hex.EncodeToString(v.Value.([]byte))
case MDTObjectID:
return v.Value.(primitive.ObjectID).Hex()
case MDTTime:
return v.Value.(time.Time).Format(time.RFC3339Nano)
case MDTDuration:
return v.Value.(time.Duration).String()
case MDTStringArray:
if langext.IsNil(v.Value) {
return "<<null>>"
}
r, err := json.MarshalIndent(v.Value.([]string), "", " ")
if err != nil {
return "(err)"
}
return string(r)
case MDTIntArray:
if langext.IsNil(v.Value) {
return "<<null>>"
}
r, err := json.MarshalIndent(v.Value.([]int), "", " ")
if err != nil {
return "(err)"
}
return string(r)
case MDTInt32Array:
if langext.IsNil(v.Value) {
return "<<null>>"
}
r, err := json.MarshalIndent(v.Value.([]int32), "", " ")
if err != nil {
return "(err)"
}
return string(r)
case MDTNil:
return "<<null>>"
case MDTEnum:
return v.Value.(EnumWrap).String()
}
return "(err)"
}
// rawValueForJson returns the `Value` field most of the time
// but for some datatypes we do special processing
// so that we can plug the output value into json.Marshal without any surprises
func (v MetaValue) rawValueForJson() any {
if v.DataType == MDTAny {
if v.Value.(AnyWrap).IsNil {
return nil
}
if v.Value.(AnyWrap).IsError {
return bson.M{"@error": true}
}
jsonobj := primitive.M{}
jsonarr := primitive.A{}
if err := json.Unmarshal([]byte(v.Value.(AnyWrap).Json), &jsonobj); err == nil {
return jsonobj
} else if err := json.Unmarshal([]byte(v.Value.(AnyWrap).Json), &jsonarr); err == nil {
return jsonarr
} else {
return bson.M{"type": v.Value.(AnyWrap).Type, "data": v.Value.(AnyWrap).Json}
}
}
if v.DataType == MDTID {
if v.Value.(IDWrap).IsNil {
return nil
}
return v.Value.(IDWrap).Value
}
if v.DataType == MDTBytes {
return hex.EncodeToString(v.Value.([]byte))
}
if v.DataType == MDTDuration {
return v.Value.(time.Duration).String()
}
if v.DataType == MDTTime {
return v.Value.(time.Time).Format(time.RFC3339Nano)
}
if v.DataType == MDTObjectID {
return v.Value.(primitive.ObjectID).Hex()
}
if v.DataType == MDTNil {
return nil
}
if v.DataType == MDTEnum {
if v.Value.(EnumWrap).IsNil {
return nil
}
if v.Value.(EnumWrap).ValueRaw != nil {
return v.Value.(EnumWrap).ValueRaw
}
return v.Value.(EnumWrap).ValueString
}
if v.DataType == MDTFloat32 {
if math.IsNaN(float64(v.Value.(float32))) {
return "float64::NaN"
} else if math.IsInf(float64(v.Value.(float32)), +1) {
return "float64::+inf"
} else if math.IsInf(float64(v.Value.(float32)), -1) {
return "float64::-inf"
} else {
return v.Value
}
}
if v.DataType == MDTFloat64 {
if math.IsNaN(v.Value.(float64)) {
return "float64::NaN"
} else if math.IsInf(v.Value.(float64), +1) {
return "float64::+inf"
} else if math.IsInf(v.Value.(float64), -1) {
return "float64::-inf"
} else {
return v.Value
}
}
return v.Value
}
func (mm MetaMap) FormatOneLine(singleMaxLen int) string {
r := ""
i := 0
for key, val := range mm {
if i > 0 {
r += ", "
}
r += "\"" + key + "\""
r += ": "
r += "\"" + val.ShortString(singleMaxLen) + "\""
i++
}
return r
}
func (mm MetaMap) FormatMultiLine(indentFront string, indentKeys string, maxLenValue int) string {
r := ""
r += indentFront + "{" + "\n"
for key, val := range mm {
if key == "gin.body" {
continue
}
if key == "gin_body" {
continue
}
r += indentFront
r += indentKeys
r += "\"" + key + "\""
r += ": "
r += "\"" + val.ShortString(maxLenValue) + "\""
r += ",\n"
}
r += indentFront + "}"
return r
}
func (mm MetaMap) Any() bool {
return len(mm) > 0
}
func (mm MetaMap) Apply(evt *zerolog.Event, limitLen *int) *zerolog.Event {
for key, val := range mm {
evt = val.Apply(key, evt, limitLen)
}
return evt
}
func (mm MetaMap) add(key string, mdtype metaDataType, val interface{}) {
if _, ok := mm[key]; !ok {
mm[key] = MetaValue{DataType: mdtype, Value: val}
return
}
for i := 2; ; i++ {
realkey := key + "-" + strconv.Itoa(i)
if _, ok := mm[realkey]; !ok {
mm[realkey] = MetaValue{DataType: mdtype, Value: val}
return
}
}
}
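A sketch of the JSON roundtrip implemented by MarshalJSON/UnmarshalJSON above (every value is encoded as "<DataType>:<serialized>" and decoded back into its original Go type):

package main

import (
    "encoding/json"
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/exerr"
)

func main() {
    mm := exerr.MetaMap{
        "retryCount": {DataType: exerr.MDTInt, Value: 3},
        "user":       {DataType: exerr.MDTString, Value: "someuser"},
    }

    raw, _ := json.Marshal(mm)
    fmt.Println(string(raw)) // {"retryCount":"Int:3","user":"String:someuser"}

    var back exerr.MetaMap
    _ = json.Unmarshal(raw, &back)
    fmt.Printf("%T %v\n", back["retryCount"].Value, back["retryCount"].Value) // int 3
}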

13
exerr/proxy.go Normal file
View File

@@ -0,0 +1,13 @@
package exerr
type Proxy struct {
v ExErr
}
func (p *Proxy) UniqueID() string {
return p.v.UniqueID
}
func (p *Proxy) Get() ExErr {
return p.v
}

14
exerr/stacktrace.go Normal file
View File

@@ -0,0 +1,14 @@
package exerr
import (
"fmt"
"runtime"
)
func callername(skip int) string {
pc := make([]uintptr, 15)
n := runtime.Callers(skip+2, pc)
frames := runtime.CallersFrames(pc[:n])
frame, _ := frames.Next()
return fmt.Sprintf("%s:%d %s", frame.File, frame.Line, frame.Function)
}

189
exerr/typeWrapper.go Normal file
View File

@@ -0,0 +1,189 @@
package exerr
import (
"encoding/json"
"fmt"
"github.com/rs/zerolog/log"
"git.blackforestbytes.com/BlackForestBytes/goext/enums"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"strings"
)
//
// These are wrapper objects, because for some metadata-types we need to serialize a bit more complex data
// (e.g. the actual type for ID objects, or the json representation for any types)
//
type IDWrap struct {
Type string
Value string
IsNil bool
}
func newIDWrap(val fmt.Stringer) IDWrap {
t := fmt.Sprintf("%T", val)
arr := strings.Split(t, ".")
if len(arr) > 0 {
t = arr[len(arr)-1]
}
if langext.IsNil(val) {
return IDWrap{Type: t, Value: "", IsNil: true}
}
v := val.String()
return IDWrap{Type: t, Value: v, IsNil: false}
}
func (w IDWrap) Serialize() string {
if w.IsNil {
return "!nil" + ":" + w.Type
}
return w.Type + ":" + w.Value
}
func (w IDWrap) String() string {
if w.IsNil {
return w.Type + "<<nil>>"
}
return w.Type + "(" + w.Value + ")"
}
func deserializeIDWrap(v string) IDWrap {
r := strings.SplitN(v, ":", 2)
if len(r) == 2 && r[0] == "!nil" {
return IDWrap{Type: r[1], Value: v, IsNil: true}
}
if len(r) == 0 {
return IDWrap{}
} else if len(r) == 1 {
return IDWrap{Type: "", Value: v, IsNil: false}
} else {
return IDWrap{Type: r[0], Value: r[1], IsNil: false}
}
}
type AnyWrap struct {
Type string
Json string
IsError bool
IsNil bool
}
func newAnyWrap(val any) (result AnyWrap) {
result = AnyWrap{Type: "", Json: "", IsError: true, IsNil: false} // ensure a return in case of recover()
defer func() {
if err := recover(); err != nil {
// reporting an error should never crash our program
log.Error().Interface("err", err).Msg("Panic while trying to marshal anywrap ( bmerror.Interface )")
}
}()
t := fmt.Sprintf("%T", val)
if langext.IsNil(val) {
return AnyWrap{Type: t, Json: "", IsError: false, IsNil: true}
}
j, err := json.Marshal(val)
if err == nil {
return AnyWrap{Type: t, Json: string(j), IsError: false, IsNil: false}
} else {
return AnyWrap{Type: t, Json: "", IsError: true, IsNil: false}
}
}
func (w AnyWrap) Serialize() string {
if w.IsError {
return "ERR" + ":" + w.Type + ":" + w.Json
} else if w.IsNil {
return "NIL" + ":" + w.Type + ":" + w.Json
} else {
return "OK" + ":" + w.Type + ":" + w.Json
}
}
func (w AnyWrap) String() string {
if w.IsError {
return "(error)"
} else if w.IsNil {
return "(nil)"
} else {
return w.Json
}
}
func deserializeAnyWrap(v string) AnyWrap {
r := strings.SplitN(v, ":", 3)
if len(r) != 3 {
return AnyWrap{IsError: true, Type: "", Json: "", IsNil: false}
} else {
if r[0] == "OK" {
return AnyWrap{IsError: false, Type: r[1], Json: r[2], IsNil: false}
} else if r[0] == "ERR" {
return AnyWrap{IsError: true, Type: r[1], Json: r[2], IsNil: false}
} else if r[0] == "NIL" {
return AnyWrap{IsError: false, Type: r[1], Json: "", IsNil: true}
} else {
return AnyWrap{IsError: true, Type: "", Json: "", IsNil: false}
}
}
}
type EnumWrap struct {
Type string
ValueString string
ValueRaw enums.Enum // `ValueRaw` is lost during serialization roundtrip
IsNil bool
}
func newEnumWrap(val enums.Enum) EnumWrap {
t := fmt.Sprintf("%T", val)
arr := strings.Split(t, ".")
if len(arr) > 0 {
t = arr[len(arr)-1]
}
if langext.IsNil(val) {
return EnumWrap{Type: t, ValueString: "", ValueRaw: val, IsNil: true}
}
if enumstr, ok := val.(enums.StringEnum); ok {
return EnumWrap{Type: t, ValueString: enumstr.String(), ValueRaw: val, IsNil: false}
}
return EnumWrap{Type: t, ValueString: fmt.Sprintf("%v", val), ValueRaw: val, IsNil: false}
}
func (w EnumWrap) Serialize() string {
if w.IsNil {
return "!nil" + ":" + w.Type
}
return w.Type + ":" + w.ValueString
}
func (w EnumWrap) String() string {
if w.IsNil {
return w.Type + "<<nil>>"
}
return "[" + w.Type + "] " + w.ValueString
}
func deserializeEnumWrap(v string) EnumWrap {
r := strings.SplitN(v, ":", 2)
if len(r) == 2 && r[0] == "!nil" {
return EnumWrap{Type: r[1], ValueString: v, ValueRaw: nil, IsNil: true}
}
if len(r) == 0 {
return EnumWrap{}
} else if len(r) == 1 {
return EnumWrap{Type: "", ValueString: v, ValueRaw: nil, IsNil: false}
} else {
return EnumWrap{Type: r[0], ValueString: r[1], ValueRaw: nil, IsNil: false}
}
}

36
fsext/exists.go Normal file
View File

@@ -0,0 +1,36 @@
package fsext
import "os"
func PathExists(fp string) (bool, error) {
_, err := os.Stat(fp)
if err == nil {
return true, nil
}
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
func FileExists(fp string) (bool, error) {
stat, err := os.Stat(fp)
if err == nil {
return !stat.IsDir(), nil
}
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
func DirectoryExists(fp string) (bool, error) {
stat, err := os.Stat(fp)
if err == nil {
return stat.IsDir(), nil
}
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
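A quick sketch distinguishing the three helpers; the paths are illustrative:

package main

import (
    "fmt"

    "git.blackforestbytes.com/BlackForestBytes/goext/fsext"
)

func main() {
    if ok, err := fsext.FileExists("/etc/hosts"); err != nil {
        fmt.Println("stat failed:", err) // e.g. permission problems - distinct from "does not exist"
    } else {
        fmt.Println("regular file exists:", ok)
    }

    ok, _ := fsext.DirectoryExists("/etc")
    fmt.Println("directory exists:", ok)
}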

68
ginext/appContext.go Normal file
View File

@@ -0,0 +1,68 @@
package ginext
import (
"context"
"github.com/gin-gonic/gin"
"time"
)
type AppContext struct {
inner context.Context
cancelFunc context.CancelFunc
cancelled bool
GinContext *gin.Context
}
func CreateAppContext(g *gin.Context, innerCtx context.Context, cancelFn context.CancelFunc) *AppContext {
for key, value := range g.Keys {
innerCtx = context.WithValue(innerCtx, key, value)
}
return &AppContext{
inner: innerCtx,
cancelFunc: cancelFn,
cancelled: false,
GinContext: g,
}
}
func CreateBackgroundAppContext() *AppContext {
return &AppContext{
inner: context.Background(),
cancelFunc: nil,
cancelled: false,
GinContext: nil,
}
}
func (ac *AppContext) Deadline() (deadline time.Time, ok bool) {
return ac.inner.Deadline()
}
func (ac *AppContext) Done() <-chan struct{} {
return ac.inner.Done()
}
func (ac *AppContext) Err() error {
return ac.inner.Err()
}
func (ac *AppContext) Value(key any) any {
return ac.inner.Value(key)
}
func (ac *AppContext) Set(key, value any) {
ac.inner = context.WithValue(ac.inner, key, value)
}
func (ac *AppContext) Cancel() {
ac.cancelled = true
if ac.cancelFunc != nil { // background contexts (CreateBackgroundAppContext) have no cancel function
ac.cancelFunc()
}
}
func (ac *AppContext) RequestURI() string {
if ac.GinContext != nil && ac.GinContext.Request != nil {
return ac.GinContext.Request.Method + " :: " + ac.GinContext.Request.RequestURI
} else {
return ""
}
}
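Since *AppContext implements context.Context, it can be handed to any context-aware API; a minimal sketch:

package main

import (
    "context"
    "time"

    "git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

func main() {
    actx := ginext.CreateBackgroundAppContext()

    // derive a normal deadline-context from it and pass it on like any other context
    ctx, cancel := context.WithTimeout(actx, 5*time.Second)
    defer cancel()

    <-ctx.Done() // fires after the timeout
}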

23
ginext/commonHandler.go Normal file
View File

@@ -0,0 +1,23 @@
package ginext
import (
"net/http"
)
func RedirectFound(newuri string) WHandlerFunc {
return func(pctx PreContext) HTTPResponse {
return Redirect(http.StatusFound, newuri)
}
}
func RedirectTemporary(newuri string) WHandlerFunc {
return func(pctx PreContext) HTTPResponse {
return Redirect(http.StatusTemporaryRedirect, newuri)
}
}
func RedirectPermanent(newuri string) WHandlerFunc {
return func(pctx PreContext) HTTPResponse {
return Redirect(http.StatusPermanentRedirect, newuri)
}
}

View File

@@ -0,0 +1,12 @@
package ginext
import (
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/dataext"
)
func BodyBuffer(g *gin.Context) {
if g.Request.Body != nil {
g.Request.Body = dataext.NewBufferedReadCloser(g.Request.Body)
}
}

25
ginext/cors.go Normal file
View File

@@ -0,0 +1,25 @@
package ginext
import (
"github.com/gin-gonic/gin"
"net/http"
"strings"
)
func CorsMiddleware(allowheader []string, exposeheader []string) gin.HandlerFunc {
return func(c *gin.Context) {
c.Writer.Header().Set("Access-Control-Allow-Origin", "*")
c.Writer.Header().Set("Access-Control-Allow-Credentials", "true")
c.Writer.Header().Set("Access-Control-Allow-Headers", strings.Join(allowheader, ", "))
if len(exposeheader) > 0 {
c.Writer.Header().Set("Access-Control-Expose-Headers", strings.Join(exposeheader, ", "))
}
c.Writer.Header().Set("Access-Control-Allow-Methods", "OPTIONS, GET, POST, PUT, PATCH, DELETE, COUNT")
if c.Request.Method == "OPTIONS" {
c.AbortWithStatus(http.StatusOK)
} else {
c.Next()
}
}
}

238
ginext/engine.go Normal file
View File

@@ -0,0 +1,238 @@
package ginext
import (
"fmt"
"github.com/gin-gonic/gin"
"github.com/rs/zerolog/log"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"git.blackforestbytes.com/BlackForestBytes/goext/mathext"
"git.blackforestbytes.com/BlackForestBytes/goext/rext"
"net"
"net/http"
"net/http/httptest"
"regexp"
"strings"
"time"
)
type GinWrapper struct {
engine *gin.Engine
suppressGinLogs bool
opt Options
allowCors bool
corsAllowHeader []string
corsExposeHeader []string
ginDebug bool
bufferBody bool
requestTimeout time.Duration
listenerBeforeRequest []func(g *gin.Context)
listenerAfterRequest []func(g *gin.Context, resp HTTPResponse)
buildRequestBindError func(g *gin.Context, fieldtype string, err error) HTTPResponse
routeSpecs []ginRouteSpec
}
type ginRouteSpec struct {
Method string
URL string
Middlewares []string
Handler string
}
type Options struct {
AllowCors *bool // Add cors handler to allow all CORS requests on the default http methods
CorsAllowHeader *[]string // override the default values of Access-Control-Allow-Headers (AllowCors must be true)
CorsExposeHeader *[]string // return Access-Control-Expose-Headers (AllowCors must be true)
GinDebug *bool // Set gin.debug to true (adds more logs)
SuppressGinLogs *bool // Suppress our custom gin logs (even if GinDebug == true)
BufferBody *bool // Buffers the input body stream, this way the ginext error handler can later include the whole request body
Timeout *time.Duration // The default handler timeout
ListenerBeforeRequest []func(g *gin.Context) // Register listener that are called before the handler method
ListenerAfterRequest []func(g *gin.Context, resp HTTPResponse) // Register listener that are called after the handler method
DebugTrimHandlerPrefixes []string // Trim these prefixes from the handler names in the debug print
DebugReplaceHandlerNames map[string]string // Replace handler names in debug output
BuildRequestBindError func(g *gin.Context, fieldtype string, err error) HTTPResponse // Override function which generates the HTTPResponse errors that are returned by the preContext.Start() methods
}
// NewEngine creates a new (wrapped) ginEngine
func NewEngine(opt Options) *GinWrapper {
ginDebug := langext.Coalesce(opt.GinDebug, true)
if ginDebug {
gin.SetMode(gin.DebugMode)
// do not debug-print routes
gin.DebugPrintRouteFunc = func(_, _, _ string, _ int) {}
} else {
gin.SetMode(gin.ReleaseMode)
// do not debug-print routes
gin.DebugPrintRouteFunc = func(_, _, _ string, _ int) {}
}
engine := gin.New()
wrapper := &GinWrapper{
engine: engine,
opt: opt,
suppressGinLogs: langext.Coalesce(opt.SuppressGinLogs, false),
allowCors: langext.Coalesce(opt.AllowCors, false),
corsAllowHeader: langext.Coalesce(opt.CorsAllowHeader, []string{"Content-Type", "Content-Length", "Accept-Encoding", "X-CSRF-Token", "Authorization", "accept", "origin", "Cache-Control", "X-Requested-With"}),
corsExposeHeader: langext.Coalesce(opt.CorsExposeHeader, []string{}),
ginDebug: ginDebug,
bufferBody: langext.Coalesce(opt.BufferBody, false),
requestTimeout: langext.Coalesce(opt.Timeout, 24*time.Hour),
listenerBeforeRequest: opt.ListenerBeforeRequest,
listenerAfterRequest: opt.ListenerAfterRequest,
buildRequestBindError: langext.Conditional(opt.BuildRequestBindError == nil, defaultBuildRequestBindError, opt.BuildRequestBindError),
}
engine.RedirectFixedPath = false
engine.RedirectTrailingSlash = false
if wrapper.allowCors {
engine.Use(CorsMiddleware(wrapper.corsAllowHeader, wrapper.corsExposeHeader))
}
if ginDebug && !wrapper.suppressGinLogs {
ginlogger := gin.Logger()
engine.Use(func(context *gin.Context) { ginlogger(context) })
}
return wrapper
}
func (w *GinWrapper) ListenAndServeHTTP(addr string, postInit func(port string)) (chan error, *http.Server) {
w.DebugPrintRoutes()
httpserver := &http.Server{
Addr: addr,
Handler: w.engine,
}
errChan := make(chan error)
go func() {
ln, err := net.Listen("tcp", httpserver.Addr)
if err != nil {
errChan <- err
return
}
_, port, err := net.SplitHostPort(ln.Addr().String())
if err != nil {
errChan <- err
return
}
log.Info().Str("address", httpserver.Addr).Msg("HTTP-Server started on http://localhost:" + port)
if postInit != nil {
postInit(port) // the net.Listener a few lines above is at this point actually already buffering requests
}
errChan <- httpserver.Serve(ln)
}()
return errChan, httpserver
}
func (w *GinWrapper) DebugPrintRoutes() {
if !w.ginDebug {
return
}
lines := make([][4]string, 0)
pad := [4]int{0, 0, 0, 0}
for _, spec := range w.routeSpecs {
line := [4]string{
spec.Method,
spec.URL,
strings.Join(langext.ArrMap(spec.Middlewares, w.cleanMiddlewareName), " -> "),
w.cleanMiddlewareName(spec.Handler),
}
lines = append(lines, line)
pad[0] = mathext.Max(pad[0], len(line[0]))
pad[1] = mathext.Max(pad[1], len(line[1]))
pad[2] = mathext.Max(pad[2], len(line[2]))
pad[3] = mathext.Max(pad[3], len(line[3]))
}
fmt.Printf("Gin-Routes:\n")
fmt.Printf("{\n")
for _, line := range lines {
fmt.Printf(" %s %s --> %s --> %s\n",
langext.StrPadRight("["+line[0]+"]", " ", pad[0]+2),
langext.StrPadRight(line[1], " ", pad[1]),
langext.StrPadRight(line[2], " ", pad[2]),
langext.StrPadRight(line[3], " ", pad[3]))
}
fmt.Printf("}\n")
}
func (w *GinWrapper) cleanMiddlewareName(fname string) string {
funcSuffix := rext.W(regexp.MustCompile(`\.func[0-9]+(?:\.[0-9]+)*$`))
if match, ok := funcSuffix.MatchFirst(fname); ok {
fname = fname[:len(fname)-match.FullMatch().Length()]
}
if strings.HasSuffix(fname, ".(*GinRoutesWrapper).WithJSONFilter") {
fname = "[JSONFilter]"
}
if fname == "ginext.BodyBuffer" {
fname = "[BodyBuffer]"
}
skipPrefixes := []string{"api.(*Handler).", "api.", "ginext.", "handler.", "admin-app.", "employee-app.", "employer-app."}
for _, pfx := range skipPrefixes {
if strings.HasPrefix(fname, pfx) {
fname = fname[len(pfx):]
}
}
for _, pfx := range w.opt.DebugTrimHandlerPrefixes {
if strings.HasPrefix(fname, pfx) {
fname = fname[len(pfx):]
}
}
for k, v := range langext.ForceMap(w.opt.DebugReplaceHandlerNames) {
if strings.EqualFold(fname, k) {
fname = v
}
}
return fname
}
// ServeHTTP only used for unit tests
func (w *GinWrapper) ServeHTTP(req *http.Request) *httptest.ResponseRecorder {
respRec := httptest.NewRecorder()
w.engine.ServeHTTP(respRec, req)
return respRec
}
// ForwardRequest manually inserts a request into this router
// = behaves as if the request came from the outside (and writes the response to `writer`)
func (w *GinWrapper) ForwardRequest(writer http.ResponseWriter, req *http.Request) {
w.engine.ServeHTTP(writer, req)
}
func (w *GinWrapper) ListRoutes() []gin.RouteInfo {
return w.engine.Routes()
}
func defaultBuildRequestBindError(g *gin.Context, fieldtype string, err error) HTTPResponse {
return Error(err)
}
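A bootstrap sketch for the wrapper; route registration happens through the Routes/GinRoutesWrapper API that is part of this changeset but not of this hunk, so it is left out here:

package main

import (
    "fmt"
    "time"

    "git.blackforestbytes.com/BlackForestBytes/goext/ginext"
    "git.blackforestbytes.com/BlackForestBytes/goext/langext"
)

func main() {
    engine := ginext.NewEngine(ginext.Options{
        AllowCors: langext.Ptr(true),
        GinDebug:  langext.Ptr(false),
        Timeout:   langext.Ptr(30 * time.Second),
    })

    // ... register routes here (omitted) ...

    errChan, _ := engine.ListenAndServeHTTP(":8080", func(port string) {
        fmt.Println("listening on port " + port)
    })
    fmt.Println("server stopped:", <-errChan)
}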

89
ginext/funcWrapper.go Normal file
View File

@@ -0,0 +1,89 @@
package ginext
import (
"fmt"
"git.blackforestbytes.com/BlackForestBytes/goext/exerr"
"github.com/gin-gonic/gin"
"net/http"
)
type WHandlerFunc func(PreContext) HTTPResponse
func Wrap(w *GinWrapper, fn WHandlerFunc) gin.HandlerFunc {
return func(g *gin.Context) {
reqctx := g.Request.Context()
pctx := PreContext{
wrapper: w,
ginCtx: g,
persistantData: &preContextData{},
}
for _, lstr := range w.listenerBeforeRequest {
lstr(g)
}
wrap, stackTrace, panicObj := callPanicSafe(fn, pctx)
if panicObj != nil {
fmt.Printf("\n======== ======== STACKTRACE ======== ========\n%s\n======== ======== ======== ========\n\n", stackTrace)
err := exerr.
New(exerr.TypePanic, "Panic occured (in gin handler)").
Any("panicObj", panicObj).
Str("trace", stackTrace).
Build()
wrap = Error(err)
}
if g.Writer.Written() {
panic("Writing in WrapperFunc is not supported")
}
if pctx.persistantData.sessionObj != nil {
err := pctx.persistantData.sessionObj.Finish(reqctx, wrap)
if err != nil {
wrap = Error(exerr.Wrap(err, "Failed to finish session").Any("originalResponse", wrap).Build())
}
}
for _, lstr := range w.listenerAfterRequest {
lstr(g, wrap)
}
if reqctx.Err() == nil {
wrap.Write(g)
}
}
}
func WrapHTTPHandler(w *GinWrapper, fn http.Handler) gin.HandlerFunc {
return func(g *gin.Context) {
for _, lstr := range w.listenerBeforeRequest {
lstr(g)
}
fn.ServeHTTP(g.Writer, g.Request)
for _, lstr := range w.listenerAfterRequest {
lstr(g, nil)
}
}
}
func WrapHTTPHandlerFunc(w *GinWrapper, fn http.HandlerFunc) gin.HandlerFunc {
return func(g *gin.Context) {
for _, lstr := range w.listenerBeforeRequest {
lstr(g)
}
fn(g.Writer, g.Request)
for _, lstr := range w.listenerAfterRequest {
lstr(g, nil)
}
}
}
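A handler sketch in the WHandlerFunc shape that Wrap() expects; it sticks to the Redirect() constructor visible in commonHandler.go, and how the handler is attached to a route is outside this hunk:

package api

import (
    "net/http"

    "git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

// LegacyRedirect is a hypothetical handler, not part of goext
func LegacyRedirect(pctx ginext.PreContext) ginext.HTTPResponse {
    var query struct {
        Token string `form:"token"`
    }

    _, _, errResp := pctx.Query(&query).Start()
    if errResp != nil {
        return *errResp
    }

    return ginext.Redirect(http.StatusFound, "/v2/endpoint?token="+query.Token)
}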

9
ginext/jsonFilter.go Normal file
View File

@@ -0,0 +1,9 @@
package ginext
import "github.com/gin-gonic/gin"
var jsonFilterKey = "goext.jsonfilter"
func SetJSONFilter(g *gin.Context, filter string) {
g.Set(jsonFilterKey, filter)
}

211
ginext/preContext.go Normal file
View File

@@ -0,0 +1,211 @@
package ginext
import (
"bytes"
"context"
"fmt"
"github.com/gin-gonic/gin"
"github.com/gin-gonic/gin/binding"
"git.blackforestbytes.com/BlackForestBytes/goext/dataext"
"git.blackforestbytes.com/BlackForestBytes/goext/exerr"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"io"
"runtime/debug"
"time"
)
type PreContext struct {
ginCtx *gin.Context
wrapper *GinWrapper
uri any
query any
body any
rawbody *[]byte
form any
header any
timeout *time.Duration
persistantData *preContextData // must be a ptr, so that we can get the values back in out Wrap func
ignoreWrongContentType bool
}
type preContextData struct {
sessionObj SessionObject
}
func (pctx *PreContext) URI(uri any) *PreContext {
pctx.uri = uri
return pctx
}
func (pctx *PreContext) Query(query any) *PreContext {
pctx.query = query
return pctx
}
func (pctx *PreContext) Body(body any) *PreContext {
pctx.body = body
return pctx
}
func (pctx *PreContext) RawBody(rawbody *[]byte) *PreContext {
pctx.rawbody = rawbody
return pctx
}
func (pctx *PreContext) Form(form any) *PreContext {
pctx.form = form
return pctx
}
func (pctx *PreContext) Header(header any) *PreContext {
pctx.header = header
return pctx
}
func (pctx *PreContext) WithTimeout(to time.Duration) *PreContext {
pctx.timeout = &to
return pctx
}
func (pctx *PreContext) WithSession(sessionObj SessionObject) *PreContext {
pctx.persistantData.sessionObj = sessionObj
return pctx
}
func (pctx *PreContext) IgnoreWrongContentType() *PreContext {
pctx.ignoreWrongContentType = true
return pctx
}
func (pctx PreContext) Start() (*AppContext, *gin.Context, *HTTPResponse) {
if pctx.uri != nil {
if err := pctx.ginCtx.ShouldBindUri(pctx.uri); err != nil {
err = exerr.Wrap(err, "Failed to read uri").
WithType(exerr.TypeBindFailURI).
Str("struct_type", fmt.Sprintf("%T", pctx.uri)).
Build()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "URI", err))
}
}
if pctx.query != nil {
if err := pctx.ginCtx.ShouldBindQuery(pctx.query); err != nil {
err = exerr.Wrap(err, "Failed to read query").
WithType(exerr.TypeBindFailQuery).
Str("struct_type", fmt.Sprintf("%T", pctx.query)).
Build()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "QUERY", err))
}
}
if pctx.body != nil {
if pctx.ginCtx.ContentType() == "application/json" {
if brc, ok := pctx.body.(dataext.BufferedReadCloser); ok {
// Ensures a fully reset (offset=0) buffer before parsing
err := brc.Reset()
if err != nil {
err = exerr.Wrap(err, "Failed to read (brc.reset) json-body").
WithType(exerr.TypeBindFailJSON).
Str("struct_type", fmt.Sprintf("%T", pctx.body)).
Build()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "JSON", err))
}
}
if err := pctx.ginCtx.ShouldBindJSON(pctx.body); err != nil {
err = exerr.Wrap(err, "Failed to read json-body").
WithType(exerr.TypeBindFailJSON).
Str("struct_type", fmt.Sprintf("%T", pctx.body)).
Build()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "JSON", err))
}
} else {
if !pctx.ignoreWrongContentType {
err := exerr.New(exerr.TypeBindFailJSON, "missing JSON body").
Str("struct_type", fmt.Sprintf("%T", pctx.body)).
Build()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "JSON", err))
}
}
}
if pctx.rawbody != nil {
if brc, ok := pctx.ginCtx.Request.Body.(dataext.BufferedReadCloser); ok {
v, err := brc.BufferedAll()
if err != nil {
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "BODY", err))
}
*pctx.rawbody = v
} else {
buf := &bytes.Buffer{}
_, err := io.Copy(buf, pctx.ginCtx.Request.Body)
if err != nil {
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "BODY", err))
}
*pctx.rawbody = buf.Bytes()
}
}
if pctx.form != nil {
if pctx.ginCtx.ContentType() == "multipart/form-data" {
if err := pctx.ginCtx.ShouldBindWith(pctx.form, binding.Form); err != nil {
err = exerr.Wrap(err, "Failed to read multipart-form").
WithType(exerr.TypeBindFailFormData).
Str("struct_type", fmt.Sprintf("%T", pctx.form)).
Build()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "FORM", err))
}
} else if pctx.ginCtx.ContentType() == "application/x-www-form-urlencoded" {
if err := pctx.ginCtx.ShouldBindWith(pctx.form, binding.Form); err != nil {
err = exerr.Wrap(err, "Failed to read urlencoded-form").
WithType(exerr.TypeBindFailFormData).
Str("struct_type", fmt.Sprintf("%T", pctx.form)).
Build()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "FORM", err))
}
} else {
if !pctx.ignoreWrongContentType {
err := exerr.New(exerr.TypeBindFailFormData, "missing form body").
Str("struct_type", fmt.Sprintf("%T", pctx.form)).
Build()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "FORM", err))
}
}
}
if pctx.header != nil {
if err := pctx.ginCtx.ShouldBindHeader(pctx.header); err != nil {
err = exerr.Wrap(err, "Failed to read header").
WithType(exerr.TypeBindFailHeader).
Str("struct_type", fmt.Sprintf("%T", pctx.query)).
Build()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "HEADER", err))
}
}
ictx, cancel := context.WithTimeout(context.Background(), langext.Coalesce(pctx.timeout, pctx.wrapper.requestTimeout))
actx := CreateAppContext(pctx.ginCtx, ictx, cancel)
if pctx.persistantData.sessionObj != nil {
err := pctx.persistantData.sessionObj.Init(pctx.ginCtx, actx)
if err != nil {
actx.Cancel()
return CreateBackgroundAppContext(), nil, langext.Ptr(pctx.wrapper.buildRequestBindError(pctx.ginCtx, "INIT", err))
}
}
return actx, pctx.ginCtx, nil
}
func callPanicSafe(fn WHandlerFunc, pctx PreContext) (res HTTPResponse, stackTrace string, panicObj any) {
defer func() {
if rec := recover(); rec != nil {
res = nil
stackTrace = string(debug.Stack())
panicObj = rec
}
}()
res = fn(pctx)
return res, "", nil
}
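
For context, a usage sketch of the PreContext builder from inside a handler; the struct names, field tags and the defer ctx.Cancel() cleanup are assumptions made for the example, not something this diff prescribes.

package ginextexamples

import (
	"net/http"

	"git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

// getUser is a hypothetical handler: it declares which parts of the request
// it wants bound (uri params and query string), lets Start() perform the
// binding, and returns an HTTPResponse value instead of writing to gin.
func getUser(pctx ginext.PreContext) ginext.HTTPResponse {
	type uriParams struct {
		UserID string `uri:"uid"`
	}
	type queryParams struct {
		Expand bool `form:"expand"`
	}

	var u uriParams
	var q queryParams

	ctx, g, errResp := pctx.URI(&u).Query(&q).Start()
	if errResp != nil {
		return *errResp // binding failed; Start() already built the error response
	}
	defer ctx.Cancel() // assumed: the handler releases the timeout context itself
	_ = g              // the underlying *gin.Context stays accessible if needed

	return ginext.JSON(http.StatusOK, map[string]any{
		"userId": u.UserID,
		"expand": q.Expand,
	})
}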

47
ginext/response.go Normal file
View File

@@ -0,0 +1,47 @@
package ginext
import (
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/exerr"
)
type cookieval struct {
name string
value string
maxAge int
path string
domain string
secure bool
httpOnly bool
}
type headerval struct {
Key string
Val string
}
type HTTPResponse interface {
Write(g *gin.Context)
WithHeader(k string, v string) HTTPResponse
WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse
IsSuccess() bool
}
type InspectableHTTPResponse interface {
HTTPResponse
Statuscode() int
BodyString(g *gin.Context) *string
ContentType() string
Headers() []string
}
type HTTPErrorResponse interface {
HTTPResponse
Error() error
}
func NotImplemented() HTTPResponse {
return Error(exerr.New(exerr.TypeNotImplemented, "").Build())
}

58
ginext/responseData.go Normal file
View File

@@ -0,0 +1,58 @@
package ginext
import (
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
)
type dataHTTPResponse struct {
statusCode int
data []byte
contentType string
headers []headerval
cookies []cookieval
}
func (j dataHTTPResponse) Write(g *gin.Context) {
for _, v := range j.headers {
g.Header(v.Key, v.Val)
}
for _, v := range j.cookies {
g.SetCookie(v.name, v.value, v.maxAge, v.path, v.domain, v.secure, v.httpOnly)
}
g.Data(j.statusCode, j.contentType, j.data)
}
func (j dataHTTPResponse) WithHeader(k string, v string) HTTPResponse {
j.headers = append(j.headers, headerval{k, v})
return j
}
func (j dataHTTPResponse) WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse {
j.cookies = append(j.cookies, cookieval{name, value, maxAge, path, domain, secure, httpOnly})
return j
}
func (j dataHTTPResponse) IsSuccess() bool {
return j.statusCode >= 200 && j.statusCode <= 399
}
func (j dataHTTPResponse) Statuscode() int {
return j.statusCode
}
func (j dataHTTPResponse) BodyString(*gin.Context) *string {
return langext.Ptr(string(j.data))
}
func (j dataHTTPResponse) ContentType() string {
return j.contentType
}
func (j dataHTTPResponse) Headers() []string {
return langext.ArrMap(j.headers, func(v headerval) string { return v.Key + "=" + v.Val })
}
func Data(sc int, contentType string, data []byte) HTTPResponse {
return &dataHTTPResponse{statusCode: sc, contentType: contentType, data: data}
}
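
A brief usage sketch for the raw-bytes constructor; the payload and header values are made up.

package ginextexamples

import (
	"net/http"

	"git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

// rawCSV returns pre-rendered bytes with an explicit content type and an
// additional response header.
func rawCSV(pctx ginext.PreContext) ginext.HTTPResponse {
	ctx, _, errResp := pctx.Start()
	if errResp != nil {
		return *errResp
	}
	defer ctx.Cancel()

	payload := []byte("id;name\n1;alice\n")
	return ginext.Data(http.StatusOK, "text/csv", payload).
		WithHeader("Cache-Control", "no-store")
}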

View File

@@ -0,0 +1,64 @@
package ginext
import (
"fmt"
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
)
type downloadDataHTTPResponse struct {
statusCode int
mimetype string
data []byte
filename *string
headers []headerval
cookies []cookieval
}
func (j downloadDataHTTPResponse) Write(g *gin.Context) {
g.Header("Content-Type", j.mimetype) // if we don't set it here gin does weird file-sniffing later...
if j.filename != nil {
g.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", *j.filename))
}
for _, v := range j.headers {
g.Header(v.Key, v.Val)
}
for _, v := range j.cookies {
g.SetCookie(v.name, v.value, v.maxAge, v.path, v.domain, v.secure, v.httpOnly)
}
g.Data(j.statusCode, j.mimetype, j.data)
}
func (j downloadDataHTTPResponse) WithHeader(k string, v string) HTTPResponse {
j.headers = append(j.headers, headerval{k, v})
return j
}
func (j downloadDataHTTPResponse) WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse {
j.cookies = append(j.cookies, cookieval{name, value, maxAge, path, domain, secure, httpOnly})
return j
}
func (j downloadDataHTTPResponse) IsSuccess() bool {
return j.statusCode >= 200 && j.statusCode <= 399
}
func (j downloadDataHTTPResponse) Statuscode() int {
return j.statusCode
}
func (j downloadDataHTTPResponse) BodyString(*gin.Context) *string {
return langext.Ptr(string(j.data))
}
func (j downloadDataHTTPResponse) ContentType() string {
return j.mimetype
}
func (j downloadDataHTTPResponse) Headers() []string {
return langext.ArrMap(j.headers, func(v headerval) string { return v.Key + "=" + v.Val })
}
func DownloadData(status int, mimetype string, filename string, data []byte) HTTPResponse {
return &downloadDataHTTPResponse{statusCode: status, mimetype: mimetype, data: data, filename: &filename}
}

56
ginext/responseEmpty.go Normal file
View File

@@ -0,0 +1,56 @@
package ginext
import (
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
)
type emptyHTTPResponse struct {
statusCode int
headers []headerval
cookies []cookieval
}
func (j emptyHTTPResponse) Write(g *gin.Context) {
for _, v := range j.headers {
g.Header(v.Key, v.Val)
}
for _, v := range j.cookies {
g.SetCookie(v.name, v.value, v.maxAge, v.path, v.domain, v.secure, v.httpOnly)
}
g.Status(j.statusCode)
}
func (j emptyHTTPResponse) WithHeader(k string, v string) HTTPResponse {
j.headers = append(j.headers, headerval{k, v})
return j
}
func (j emptyHTTPResponse) WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse {
j.cookies = append(j.cookies, cookieval{name, value, maxAge, path, domain, secure, httpOnly})
return j
}
func (j emptyHTTPResponse) IsSuccess() bool {
return j.statusCode >= 200 && j.statusCode <= 399
}
func (j emptyHTTPResponse) Statuscode() int {
return j.statusCode
}
func (j emptyHTTPResponse) BodyString(*gin.Context) *string {
return nil
}
func (j emptyHTTPResponse) ContentType() string {
return ""
}
func (j emptyHTTPResponse) Headers() []string {
return langext.ArrMap(j.headers, func(v headerval) string { return v.Key + "=" + v.Val })
}
func Status(sc int) HTTPResponse {
return &emptyHTTPResponse{statusCode: sc}
}

73
ginext/responseFile.go Normal file
View File

@@ -0,0 +1,73 @@
package ginext
import (
"fmt"
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"os"
)
type fileHTTPResponse struct {
mimetype string
filepath string
filename *string
headers []headerval
cookies []cookieval
}
func (j fileHTTPResponse) Write(g *gin.Context) {
g.Header("Content-Type", j.mimetype) // if we don't set it here gin does weird file-sniffing later...
if j.filename != nil {
g.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", *j.filename))
}
for _, v := range j.headers {
g.Header(v.Key, v.Val)
}
for _, v := range j.cookies {
g.SetCookie(v.name, v.value, v.maxAge, v.path, v.domain, v.secure, v.httpOnly)
}
g.File(j.filepath)
}
func (j fileHTTPResponse) WithHeader(k string, v string) HTTPResponse {
j.headers = append(j.headers, headerval{k, v})
return j
}
func (j fileHTTPResponse) WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse {
j.cookies = append(j.cookies, cookieval{name, value, maxAge, path, domain, secure, httpOnly})
return j
}
func (j fileHTTPResponse) IsSuccess() bool {
return true
}
func (j fileHTTPResponse) Statuscode() int {
return 200
}
func (j fileHTTPResponse) BodyString(*gin.Context) *string {
data, err := os.ReadFile(j.filepath)
if err != nil {
return nil
}
return langext.Ptr(string(data))
}
func (j fileHTTPResponse) ContentType() string {
return j.mimetype
}
func (j fileHTTPResponse) Headers() []string {
return langext.ArrMap(j.headers, func(v headerval) string { return v.Key + "=" + v.Val })
}
func File(mimetype string, filepath string) HTTPResponse {
return &fileHTTPResponse{mimetype: mimetype, filepath: filepath}
}
func Download(mimetype string, filepath string, filename string) HTTPResponse {
return &fileHTTPResponse{mimetype: mimetype, filepath: filepath, filename: &filename}
}
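
A usage sketch for the file-based responses; the path and filename are placeholders. File() serves the file inline, Download() additionally sets Content-Disposition so clients save it under the given name.

package ginextexamples

import (
	"git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

// serveReport streams a file from disk as an attachment.
func serveReport(pctx ginext.PreContext) ginext.HTTPResponse {
	ctx, _, errResp := pctx.Start()
	if errResp != nil {
		return *errResp
	}
	defer ctx.Cancel()

	return ginext.Download("application/pdf", "/var/data/report.pdf", "report.pdf")
}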

78
ginext/responseJson.go Normal file
View File

@@ -0,0 +1,78 @@
package ginext
import (
"github.com/gin-gonic/gin"
json "git.blackforestbytes.com/BlackForestBytes/goext/gojson"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
)
type jsonHTTPResponse struct {
statusCode int
data any
headers []headerval
cookies []cookieval
filterOverride *string
}
func (j jsonHTTPResponse) jsonRenderer(g *gin.Context) json.GoJsonRender {
var f *string
if jsonfilter := g.GetString(jsonFilterKey); jsonfilter != "" {
f = &jsonfilter
}
if j.filterOverride != nil {
f = j.filterOverride
}
return json.GoJsonRender{Data: j.data, NilSafeSlices: true, NilSafeMaps: true, Filter: f}
}
func (j jsonHTTPResponse) Write(g *gin.Context) {
for _, v := range j.headers {
g.Header(v.Key, v.Val)
}
for _, v := range j.cookies {
g.SetCookie(v.name, v.value, v.maxAge, v.path, v.domain, v.secure, v.httpOnly)
}
g.Render(j.statusCode, j.jsonRenderer(g))
}
func (j jsonHTTPResponse) WithHeader(k string, v string) HTTPResponse {
j.headers = append(j.headers, headerval{k, v})
return j
}
func (j jsonHTTPResponse) WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse {
j.cookies = append(j.cookies, cookieval{name, value, maxAge, path, domain, secure, httpOnly})
return j
}
func (j jsonHTTPResponse) IsSuccess() bool {
return j.statusCode >= 200 && j.statusCode <= 399
}
func (j jsonHTTPResponse) Statuscode() int {
return j.statusCode
}
func (j jsonHTTPResponse) BodyString(g *gin.Context) *string {
if str, err := j.jsonRenderer(g).RenderString(); err == nil {
return &str
} else {
return nil
}
}
func (j jsonHTTPResponse) ContentType() string {
return "application/json"
}
func (j jsonHTTPResponse) Headers() []string {
return langext.ArrMap(j.headers, func(v headerval) string { return v.Key + "=" + v.Val })
}
func JSON(sc int, data any) HTTPResponse {
return &jsonHTTPResponse{statusCode: sc, data: data}
}
func JSONWithFilter(sc int, data any, f string) HTTPResponse {
return &jsonHTTPResponse{statusCode: sc, data: data, filterOverride: &f}
}
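
A usage sketch for the JSON responses; the DTO and the filter name "public" are invented. JSON() honours a filter set on the context (SetJSONFilter / WithJSONFilter), while JSONWithFilter() forces one for this single response.

package ginextexamples

import (
	"net/http"

	"git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

type userDTO struct {
	ID    string `json:"id"`
	Email string `json:"email"`
}

// listUsers returns a JSON body rendered with an explicit filter override.
func listUsers(pctx ginext.PreContext) ginext.HTTPResponse {
	ctx, _, errResp := pctx.Start()
	if errResp != nil {
		return *errResp
	}
	defer ctx.Cancel()

	users := []userDTO{{ID: "u1", Email: "alice@example.com"}}
	return ginext.JSONWithFilter(http.StatusOK, users, "public")
}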

81
ginext/responseJsonAPI.go Normal file
View File

@@ -0,0 +1,81 @@
package ginext
import (
"context"
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/exerr"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
)
type jsonAPIErrResponse struct {
err *exerr.ExErr
headers []headerval
cookies []cookieval
}
func (j jsonAPIErrResponse) Error() error {
return j.err
}
func (j jsonAPIErrResponse) Write(g *gin.Context) {
for _, v := range j.headers {
g.Header(v.Key, v.Val)
}
for _, v := range j.cookies {
g.SetCookie(v.name, v.value, v.maxAge, v.path, v.domain, v.secure, v.httpOnly)
}
exerr.Get(j.err).Output(context.Background(), g)
j.err.CallListener(exerr.MethodOutput, exerr.ListenerOpt{NoLog: false})
}
func (j jsonAPIErrResponse) WithHeader(k string, v string) HTTPResponse {
j.headers = append(j.headers, headerval{k, v})
return j
}
func (j jsonAPIErrResponse) WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse {
j.cookies = append(j.cookies, cookieval{name, value, maxAge, path, domain, secure, httpOnly})
return j
}
func (j jsonAPIErrResponse) IsSuccess() bool {
return false
}
func (j jsonAPIErrResponse) Statuscode() int {
return langext.Coalesce(j.err.RecursiveStatuscode(), 0)
}
func (j jsonAPIErrResponse) BodyString(*gin.Context) *string {
if str, err := j.err.ToDefaultAPIJson(); err == nil {
return &str
} else {
return nil
}
}
func (j jsonAPIErrResponse) ContentType() string {
return "application/json"
}
func (j jsonAPIErrResponse) Headers() []string {
return langext.ArrMap(j.headers, func(v headerval) string { return v.Key + "=" + v.Val })
}
func (j jsonAPIErrResponse) Unwrap() error {
return j.err
}
func Error(e error) HTTPResponse {
return &jsonAPIErrResponse{
err: exerr.FromError(e),
}
}
func ErrWrap(e error, errorType exerr.ErrorType, msg string) HTTPResponse {
return &jsonAPIErrResponse{
err: exerr.FromError(exerr.Wrap(e, msg).WithType(errorType).Build()),
}
}
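
A usage sketch for the error path; loadThing and its error are invented. Error() converts any error into a JSON error response whose statuscode and body come from the attached exerr metadata, and ErrWrap() does the same after wrapping the error with a message and an exerr.ErrorType.

package ginextexamples

import (
	"errors"
	"net/http"

	"git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

// loadThing stands in for business logic that can fail.
func loadThing(id string) (any, error) {
	return nil, errors.New("thing not found: " + id)
}

// getThing returns either the loaded value as JSON or an error response.
func getThing(pctx ginext.PreContext) ginext.HTTPResponse {
	type uriParams struct {
		ID string `uri:"id"`
	}
	var u uriParams

	ctx, _, errResp := pctx.URI(&u).Start()
	if errResp != nil {
		return *errResp
	}
	defer ctx.Cancel()

	thing, err := loadThing(u.ID)
	if err != nil {
		return ginext.Error(err)
	}
	return ginext.JSON(http.StatusOK, thing)
}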

View File

@@ -0,0 +1,57 @@
package ginext
import (
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
)
type redirectHTTPResponse struct {
statusCode int
url string
headers []headerval
cookies []cookieval
}
func (j redirectHTTPResponse) Write(g *gin.Context) {
for _, v := range j.headers {
g.Header(v.Key, v.Val)
}
for _, v := range j.cookies {
g.SetCookie(v.name, v.value, v.maxAge, v.path, v.domain, v.secure, v.httpOnly)
}
g.Redirect(j.statusCode, j.url)
}
func (j redirectHTTPResponse) WithHeader(k string, v string) HTTPResponse {
j.headers = append(j.headers, headerval{k, v})
return j
}
func (j redirectHTTPResponse) WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse {
j.cookies = append(j.cookies, cookieval{name, value, maxAge, path, domain, secure, httpOnly})
return j
}
func (j redirectHTTPResponse) IsSuccess() bool {
return j.statusCode >= 200 && j.statusCode <= 399
}
func (j redirectHTTPResponse) Statuscode() int {
return j.statusCode
}
func (j redirectHTTPResponse) BodyString(*gin.Context) *string {
return nil
}
func (j redirectHTTPResponse) ContentType() string {
return ""
}
func (j redirectHTTPResponse) Headers() []string {
return langext.ArrMap(j.headers, func(v headerval) string { return v.Key + "=" + v.Val })
}
func Redirect(sc int, newURL string) HTTPResponse {
return &redirectHTTPResponse{statusCode: sc, url: newURL}
}

View File

@@ -0,0 +1,72 @@
package ginext
import (
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/exerr"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"io"
"net/http"
"time"
)
type seekableResponse struct {
data io.ReadSeeker
contentType string
filename string
headers []headerval
cookies []cookieval
}
func (j seekableResponse) Write(g *gin.Context) {
g.Header("Content-Type", j.contentType) // if we don't set it here http.ServeContent does weird sniffing later...
for _, v := range j.headers {
g.Header(v.Key, v.Val)
}
for _, v := range j.cookies {
g.SetCookie(v.name, v.value, v.maxAge, v.path, v.domain, v.secure, v.httpOnly)
}
http.ServeContent(g.Writer, g.Request, j.filename, time.Unix(0, 0), j.data)
if clsr, ok := j.data.(io.ReadSeekCloser); ok {
err := clsr.Close()
if err != nil {
exerr.Wrap(err, "failed to close io.ReadSeerkClose in ginext.Seekable").Str("filename", j.filename).Print()
}
}
}
func (j seekableResponse) WithHeader(k string, v string) HTTPResponse {
j.headers = append(j.headers, headerval{k, v})
return j
}
func (j seekableResponse) WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse {
j.cookies = append(j.cookies, cookieval{name, value, maxAge, path, domain, secure, httpOnly})
return j
}
func (j seekableResponse) IsSuccess() bool {
return true
}
func (j seekableResponse) Statuscode() int {
return 200
}
func (j seekableResponse) BodyString(*gin.Context) *string {
return langext.Ptr("(seekable)")
}
func (j seekableResponse) ContentType() string {
return j.contentType
}
func (j seekableResponse) Headers() []string {
return langext.ArrMap(j.headers, func(v headerval) string { return v.Key + "=" + v.Val })
}
func Seekable(filename string, contentType string, data io.ReadSeeker) HTTPResponse {
return &seekableResponse{filename: filename, contentType: contentType, data: data}
}
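
A usage sketch for Seekable; the path is a placeholder. An *os.File is an io.ReadSeekCloser, so http.ServeContent can answer range requests and the file is closed after writing.

package ginextexamples

import (
	"os"

	"git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

// streamVideo serves a large file with range-request support.
func streamVideo(pctx ginext.PreContext) ginext.HTTPResponse {
	ctx, _, errResp := pctx.Start()
	if errResp != nil {
		return *errResp
	}
	defer ctx.Cancel()

	f, err := os.Open("/var/data/video.mp4")
	if err != nil {
		return ginext.Error(err)
	}
	return ginext.Seekable("video.mp4", "video/mp4", f)
}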

57
ginext/responseText.go Normal file
View File

@@ -0,0 +1,57 @@
package ginext
import (
"github.com/gin-gonic/gin"
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
)
type textHTTPResponse struct {
statusCode int
data string
headers []headerval
cookies []cookieval
}
func (j textHTTPResponse) Write(g *gin.Context) {
for _, v := range j.headers {
g.Header(v.Key, v.Val)
}
for _, v := range j.cookies {
g.SetCookie(v.name, v.value, v.maxAge, v.path, v.domain, v.secure, v.httpOnly)
}
g.String(j.statusCode, "%s", j.data)
}
func (j textHTTPResponse) WithHeader(k string, v string) HTTPResponse {
j.headers = append(j.headers, headerval{k, v})
return j
}
func (j textHTTPResponse) WithCookie(name string, value string, maxAge int, path string, domain string, secure bool, httpOnly bool) HTTPResponse {
j.cookies = append(j.cookies, cookieval{name, value, maxAge, path, domain, secure, httpOnly})
return j
}
func (j textHTTPResponse) IsSuccess() bool {
return j.statusCode >= 200 && j.statusCode <= 399
}
func (j textHTTPResponse) Statuscode() int {
return j.statusCode
}
func (j textHTTPResponse) BodyString(*gin.Context) *string {
return langext.Ptr(j.data)
}
func (j textHTTPResponse) ContentType() string {
return "text/plain"
}
func (j textHTTPResponse) Headers() []string {
return langext.ArrMap(j.headers, func(v headerval) string { return v.Key + "=" + v.Val })
}
func Text(sc int, data string) HTTPResponse {
return &textHTTPResponse{statusCode: sc, data: data}
}

273
ginext/routes.go Normal file
View File

@@ -0,0 +1,273 @@
package ginext
import (
"git.blackforestbytes.com/BlackForestBytes/goext/langext"
"github.com/gin-gonic/gin"
"net/http"
"path"
"reflect"
"runtime"
"strings"
)
var anyMethods = []string{
http.MethodGet, http.MethodPost, http.MethodPut, http.MethodPatch,
http.MethodHead, http.MethodOptions, http.MethodDelete, http.MethodConnect,
http.MethodTrace,
}
type GinRoutesWrapper struct {
wrapper *GinWrapper
routes gin.IRouter
absPath string
defaultHandler []gin.HandlerFunc
}
type GinRouteBuilder struct {
routes *GinRoutesWrapper
method string
relPath string
absPath string
handlers []gin.HandlerFunc
}
func (w *GinWrapper) Routes() *GinRoutesWrapper {
return &GinRoutesWrapper{
wrapper: w,
routes: w.engine,
absPath: "",
defaultHandler: make([]gin.HandlerFunc, 0),
}
}
func (w *GinRoutesWrapper) Group(relativePath string) *GinRoutesWrapper {
return &GinRoutesWrapper{
wrapper: w.wrapper,
routes: w.routes.Group(relativePath),
defaultHandler: langext.ArrCopy(w.defaultHandler),
absPath: joinPaths(w.absPath, relativePath),
}
}
func (w *GinRoutesWrapper) Use(middleware ...gin.HandlerFunc) *GinRoutesWrapper {
defHandler := langext.ArrCopy(w.defaultHandler)
defHandler = append(defHandler, middleware...)
return &GinRoutesWrapper{wrapper: w.wrapper, routes: w.routes, defaultHandler: defHandler, absPath: w.absPath}
}
func (w *GinRoutesWrapper) WithJSONFilter(filter string) *GinRoutesWrapper {
return w.Use(func(g *gin.Context) { g.Set(jsonFilterKey, filter) })
}
func (w *GinRoutesWrapper) GET(relativePath string) *GinRouteBuilder {
return w._route(http.MethodGet, relativePath)
}
func (w *GinRoutesWrapper) POST(relativePath string) *GinRouteBuilder {
return w._route(http.MethodPost, relativePath)
}
func (w *GinRoutesWrapper) DELETE(relativePath string) *GinRouteBuilder {
return w._route(http.MethodDelete, relativePath)
}
func (w *GinRoutesWrapper) PATCH(relativePath string) *GinRouteBuilder {
return w._route(http.MethodPatch, relativePath)
}
func (w *GinRoutesWrapper) PUT(relativePath string) *GinRouteBuilder {
return w._route(http.MethodPut, relativePath)
}
func (w *GinRoutesWrapper) OPTIONS(relativePath string) *GinRouteBuilder {
return w._route(http.MethodOptions, relativePath)
}
func (w *GinRoutesWrapper) HEAD(relativePath string) *GinRouteBuilder {
return w._route(http.MethodHead, relativePath)
}
func (w *GinRoutesWrapper) COUNT(relativePath string) *GinRouteBuilder {
return w._route("COUNT", relativePath)
}
func (w *GinRoutesWrapper) Any(relativePath string) *GinRouteBuilder {
return w._route("*", relativePath)
}
func (w *GinRoutesWrapper) _route(method string, relativePath string) *GinRouteBuilder {
return &GinRouteBuilder{
routes: w,
method: method,
relPath: relativePath,
absPath: joinPaths(w.absPath, relativePath),
handlers: langext.ArrCopy(w.defaultHandler),
}
}
func (w *GinRouteBuilder) Use(middleware ...gin.HandlerFunc) *GinRouteBuilder {
w.handlers = append(w.handlers, middleware...)
return w
}
func (w *GinRouteBuilder) WithJSONFilter(filter string) *GinRouteBuilder {
return w.Use(func(g *gin.Context) { g.Set(jsonFilterKey, filter) })
}
func (w *GinRouteBuilder) Handle(handler WHandlerFunc) {
if w.routes.wrapper.bufferBody {
arr := make([]gin.HandlerFunc, 0, len(w.handlers)+1)
arr = append(arr, BodyBuffer)
arr = append(arr, w.handlers...)
w.handlers = arr
}
middlewareNames := langext.ArrMap(w.handlers, func(v gin.HandlerFunc) string { return nameOfFunction(v) })
handlerName := nameOfFunction(handler)
w.handlers = append(w.handlers, Wrap(w.routes.wrapper, handler))
methodName := w.method
if w.method == "*" {
methodName = "ANY"
for _, method := range anyMethods {
w.routes.routes.Handle(method, w.relPath, w.handlers...)
}
} else {
w.routes.routes.Handle(w.method, w.relPath, w.handlers...)
}
w.routes.wrapper.routeSpecs = append(w.routes.wrapper.routeSpecs, ginRouteSpec{
Method: methodName,
URL: w.absPath,
Middlewares: middlewareNames,
Handler: handlerName,
})
}
func (w *GinRouteBuilder) HandleRawHTTPHandler(f http.Handler) {
if w.routes.wrapper.bufferBody {
arr := make([]gin.HandlerFunc, 0, len(w.handlers)+1)
arr = append(arr, BodyBuffer)
arr = append(arr, w.handlers...)
w.handlers = arr
}
middlewareNames := langext.ArrMap(w.handlers, func(v gin.HandlerFunc) string { return nameOfFunction(v) })
w.handlers = append(w.handlers, WrapHTTPHandler(w.routes.wrapper, f))
methodName := w.method
if w.method == "*" {
methodName = "ANY"
for _, method := range anyMethods {
w.routes.routes.Handle(method, w.relPath, w.handlers...)
}
} else {
w.routes.routes.Handle(w.method, w.relPath, w.handlers...)
}
w.routes.wrapper.routeSpecs = append(w.routes.wrapper.routeSpecs, ginRouteSpec{
Method: methodName,
URL: w.absPath,
Middlewares: middlewareNames,
Handler: "[HTTPHandler]",
})
}
func (w *GinRouteBuilder) HandleRawHTTPHandlerFunc(f http.HandlerFunc) {
if w.routes.wrapper.bufferBody {
arr := make([]gin.HandlerFunc, 0, len(w.handlers)+1)
arr = append(arr, BodyBuffer)
arr = append(arr, w.handlers...)
w.handlers = arr
}
middlewareNames := langext.ArrMap(w.handlers, func(v gin.HandlerFunc) string { return nameOfFunction(v) })
w.handlers = append(w.handlers, WrapHTTPHandlerFunc(w.routes.wrapper, f))
methodName := w.method
if w.method == "*" {
methodName = "ANY"
for _, method := range anyMethods {
w.routes.routes.Handle(method, w.relPath, w.handlers...)
}
} else {
w.routes.routes.Handle(w.method, w.relPath, w.handlers...)
}
w.routes.wrapper.routeSpecs = append(w.routes.wrapper.routeSpecs, ginRouteSpec{
Method: methodName,
URL: w.absPath,
Middlewares: middlewareNames,
Handler: "[HTTPHandlerFunc]",
})
}
func (w *GinWrapper) NoRoute(handler WHandlerFunc) {
handlers := make([]gin.HandlerFunc, 0)
if w.bufferBody {
handlers = append(handlers, BodyBuffer)
}
middlewareNames := langext.ArrMap(handlers, func(v gin.HandlerFunc) string { return nameOfFunction(v) })
handlerName := nameOfFunction(handler)
handlers = append(handlers, Wrap(w, handler))
w.engine.NoRoute(handlers...)
w.routeSpecs = append(w.routeSpecs, ginRouteSpec{
Method: "ANY",
URL: "[NO_ROUTE]",
Middlewares: middlewareNames,
Handler: handlerName,
})
}
func nameOfFunction(f any) string {
fname := runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()
split := strings.Split(fname, "/")
if len(split) == 0 {
return ""
}
fname = split[len(split)-1]
// https://stackoverflow.com/a/32925345/1761622
if strings.HasSuffix(fname, "-fm") {
fname = fname[:len(fname)-len("-fm")]
}
return fname
}
// joinPaths is copied verbatim from gin@v1.9.1/gin.go
func joinPaths(absolutePath, relativePath string) string {
if relativePath == "" {
return absolutePath
}
finalPath := path.Join(absolutePath, relativePath)
if lastChar(relativePath) == '/' && lastChar(finalPath) != '/' {
return finalPath + "/"
}
return finalPath
}
func lastChar(str string) uint8 {
if str == "" {
panic("The length of the string can't be 0")
}
return str[len(str)-1]
}
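
For orientation, a sketch of how the route builder is typically wired up; registerRoutes receives an already-constructed *ginext.GinWrapper (its construction is not part of this diff), and the paths, filter name and handler are made up.

package ginextexamples

import (
	"net/http"

	"git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

// registerRoutes groups routes under /api/v1, applies a JSON filter to the
// whole group and registers both wrapped and raw net/http handlers.
func registerRoutes(w *ginext.GinWrapper) {
	api := w.Routes().Group("/api/v1").WithJSONFilter("public")

	api.GET("/ping").Handle(ping)
	api.Any("/echo").Handle(ping) // registered once per HTTP method

	// a plain net/http handler can be mounted as well
	api.GET("/healthz").HandleRawHTTPHandlerFunc(func(rw http.ResponseWriter, _ *http.Request) {
		rw.WriteHeader(http.StatusOK)
	})

	w.NoRoute(ping)
}

// ping is a minimal hypothetical handler used by the routes above.
func ping(pctx ginext.PreContext) ginext.HTTPResponse {
	ctx, _, errResp := pctx.Start()
	if errResp != nil {
		return *errResp
	}
	defer ctx.Cancel()
	return ginext.Text(http.StatusOK, "pong")
}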

11
ginext/session.go Normal file
View File

@@ -0,0 +1,11 @@
package ginext
import (
"context"
"github.com/gin-gonic/gin"
)
type SessionObject interface {
Init(g *gin.Context, ctx *AppContext) error
Finish(ctx context.Context, resp HTTPResponse) error
}
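
As an illustration of the interface, a hypothetical SessionObject that opens a database transaction when the request starts and commits or rolls it back depending on the response; the dbTxSession type, its fields and the commit-on-success policy are all assumptions for this sketch.

package ginextexamples

import (
	"context"
	"database/sql"

	"github.com/gin-gonic/gin"

	"git.blackforestbytes.com/BlackForestBytes/goext/ginext"
)

// dbTxSession is a hypothetical SessionObject: Init runs before the handler
// (inside PreContext.Start), Finish runs after it (inside Wrap) with the
// produced response.
type dbTxSession struct {
	db *sql.DB
	tx *sql.Tx
}

func (s *dbTxSession) Init(g *gin.Context, ctx *ginext.AppContext) error {
	tx, err := s.db.BeginTx(g.Request.Context(), nil)
	if err != nil {
		return err
	}
	s.tx = tx
	return nil
}

func (s *dbTxSession) Finish(ctx context.Context, resp ginext.HTTPResponse) error {
	if resp != nil && resp.IsSuccess() {
		return s.tx.Commit()
	}
	return s.tx.Rollback()
}

// compile-time check that the sketch satisfies the interface
var _ ginext.SessionObject = (*dbTxSession)(nil)

A handler would opt in by calling pctx.WithSession(&dbTxSession{db: someDB}) before Start(); Start() then invokes Init, and Wrap invokes Finish once the handler has returned.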

66
go.mod
View File

@@ -1,10 +1,66 @@
-module gogs.mikescher.com/BlackForestBytes/goext
module git.blackforestbytes.com/BlackForestBytes/goext
-go 1.19
go 1.23.0
toolchain go1.24.0
require (
-golang.org/x/sys v0.1.0
-golang.org/x/term v0.1.0
github.com/gin-gonic/gin v1.10.1
github.com/glebarez/go-sqlite v1.22.0 // only needed for tests -.-
github.com/jmoiron/sqlx v1.4.0
github.com/rs/xid v1.6.0
github.com/rs/zerolog v1.34.0
go.mongodb.org/mongo-driver v1.17.4
golang.org/x/crypto v0.40.0
golang.org/x/sys v0.34.0
golang.org/x/term v0.33.0
)
-require github.com/jmoiron/sqlx v1.3.5 // indirect
require (
github.com/disintegration/imaging v1.6.2
github.com/jung-kurt/gofpdf v1.16.2
golang.org/x/net v0.42.0
golang.org/x/sync v0.16.0
)
require (
github.com/bytedance/sonic v1.13.3 // indirect
github.com/bytedance/sonic/loader v0.3.0 // indirect
github.com/cloudwego/base64x v0.1.5 // indirect
github.com/cloudwego/iasm v0.2.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.27.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/golang/snappy v1.0.0 // indirect
github.com/google/uuid v1.5.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/montanaflynn/stats v0.7.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.3.0 // indirect
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
github.com/xdg-go/scram v1.1.2 // indirect
github.com/xdg-go/stringprep v1.0.4 // indirect
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
golang.org/x/arch v0.19.0 // indirect
golang.org/x/image v0.29.0 // indirect
golang.org/x/text v0.27.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.37.6 // indirect
modernc.org/mathutil v1.6.0 // indirect
modernc.org/memory v1.7.2 // indirect
modernc.org/sqlite v1.28.0 // indirect
)

Some files were not shown because too many files have changed in this diff.