login.fr.cloud.gov 2019 oauth redirect_uri vulnerability report

i’ve been reading oauth vulnerability reports to better understand the various attack vectors. a common one is a case where the redirect_uri is not validated as part of the initial authorization request. i found an old hackerone report about this exact scenario

here’s the report

ok so login.fr.cloud.gov is clearly the authorization server. the attacker is using an oauth client that's already registered with the provider and configured with its own set of redirect_uris. i'm going to guess the attacker doesn't actually own the client app in this case. the client_id is not a secret, so it's fairly easy to get a hold of. once the attacker constructs the link with their own custom redirect param, they can share it

as part of a phishing attack, an unsuspecting user can click on that link and authorize access. upon success, they will be redirected to the attacker-supplied redirect url (which the server treats as valid) with a code parameter that can be exchanged for an access token. with that access token, requests can now be made against the server for the user's data

the reporter here is suggesting that an attacker can provide any malicious url, so that the authz code actually gets redirected to evil.com where the attacker can retrieve the code. for them to really do anything with the code, we probably have to assume that this is an implicit grant flow where the authorization step actually redirects with an access token (the reporter does mention an access token in the report, so that's probably a safe assumption). otherwise, an attacker can't really do much with the authz code without the oauth client's full credentials.
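to make the shape of the attack concrete, the crafted link would look something like this (the endpoint path, client_id, and parameters below are purely illustrative, not taken from the report):

https://login.fr.cloud.gov/oauth/authorize?client_id=SOME_REGISTERED_CLIENT&response_type=token&redirect_uri=https://evil.com/capture

if the server doesn't check that redirect_uri against the client's registered list, the victim's browser ends up delivering the token (or code) straight to evil.com after they approve the request.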

the simplest way to mitigate this attack is to make sure redirect uris are validated properly against the client's registered list. nowadays it's also recommended to avoid implicit grants, which skip the step where the oauth client exchanges an authorization code using its protected client credentials. if you have a confidential oauth app, you should use the authorization code flow.

crafting interpreters 25.2 – compiler upvalues

the purpose of closure objects is to hold references to closed over variables. but how does a closure find those variables if they may or may not be on the stack? we can't rely on the exact mechanism of local resolution because locals are always guaranteed to be on the stack during a function's execution!

Since local variables are lexically scoped in Lox, we have enough knowledge at compile time to resolve which surrounding local variables a function accesses and where those locals are declared. That, in turn, means we know how many upvalues a closure needs, which variables they capture, and which stack slots contain those variables in the declaring function’s stack window.

the new abstraction introduced here is something called an upvalue. an upvalue is what the compiler sees as a closed over variable. what bob is saying above is that we can figure out exactly what our upvalues are at compile time and make sure that at runtime those variables are accessible on the vm stack

exactly how that is done is a bit more complicated, and it's not immediately clear when reading the section on upvalues how the implementation supports the eventual runtime variable capturing behavior. in a way he's basically saying: here's how we want to compile upvalues – trust me, we'll need this information at runtime when we create closures.

one of the first questions i had reading this section was: how does the vm at runtime, given these upvalue indices, differentiate between locals and upvalues? we know that locals get pushed onto the vm stack when they're referenced by other expressions, and OP_GET_LOCAL indexes into a position relative to the current call frame's stack window. but what about upvalues? not all upvalues are necessarily on the stack.

this wasn’t answered until later when he added an array of pointers to upvalues (ObjUpvalue** upvalues;) to closure objects. so these indices we’re building at compile time are going to index into that array in our closures. since these are pointers, they could be pointing at captured variables that are still on the stack or ones that bob eventually moves onto the heap.
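for reference, here's roughly the shape those runtime objects end up taking later in the chapter (paraphrased from memory, so treat it as a sketch rather than the exact book listing):

typedef struct ObjUpvalue {
  Obj obj;
  Value* location;   // points at the captured variable (a stack slot for now, the heap later)
} ObjUpvalue;

typedef struct {
  Obj obj;
  ObjFunction* function;
  ObjUpvalue** upvalues; // one pointer per captured variable, indexed by the compile time upvalue index
  int upvalueCount;
} ObjClosure;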

from objfuncs to objclosures

at compile time, at the end of a function's block compilation we now emit a new instruction, OP_CLOSURE, that the VM will use at runtime to wrap our function object in a new closure object. the idea is that we're going to use this closure object to store references to closed over variables (upvalues).

as a refresher, each time we create a compiler instance per function declaration, we also create a new function object via newFunction.

 static void function(FunctionType type) {
    Compiler compiler;
    initCompiler(&compiler, type);
    beginScope();
    ...
    ObjFunction* function = endCompiler();
    // emit a closure instruction!
    emitBytes(OP_CLOSURE, makeConstant(OBJ_VAL(function)));
  }
  
  ...
  
 static void initCompiler(Compiler* compiler, FunctionType type) {
  compiler->enclosing = current;
  compiler->function = NULL;
  compiler->type = type;
  compiler->localCount = 0;
  compiler->scopeDepth = 0;
  compiler->function = newFunction();
  current = compiler;
  if (type != TYPE_SCRIPT) {
    current->function->name = copyString(parser.previous.start,
                                         parser.previous.length);
  }
  ...
}

now at the end of the function compilation we make sure to emit an OP_CLOSURE so that at runtime, we use that opcode to wrap the raw ObjFunction in a closure and push it onto the stack.
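roughly what the vm's run loop does with that opcode when closures are first introduced (paraphrased, and this is before any upvalue handling gets added to the instruction):

case OP_CLOSURE: {
  // read the compiled ObjFunction out of the constant table,
  // wrap it in a fresh ObjClosure, and push the closure
  ObjFunction* function = AS_FUNCTION(READ_CONSTANT());
  ObjClosure* closure = newClosure(function);
  push(OBJ_VAL(closure));
  break;
}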

below is the disassembly of fun foo() { fun bar(){} }

> fun foo() { fun bar(){} }
== bar ==
0000    1 OP_NIL
0001    | OP_RETURN
== foo ==
0000    1 OP_CLOSURE          0 <fn bar>
0002    | OP_NIL
0003    | OP_RETURN
== <script> ==
0000    1 OP_CLOSURE          1 <fn foo>
0002    | OP_DEFINE_GLOBAL    0 'foo'
0004    2 OP_NIL
0005    | OP_RETURN
          [ <script> ]
0000    1 OP_CLOSURE          1 <fn foo>
          [ <script> ][ <fn foo> ]
0002    | OP_DEFINE_GLOBAL    0 'foo'
          [ <script> ]
0004    2 OP_NIL
          [ <script> ][ nil ]
0005    | OP_RETURN

there’s a couple of interesting things about this design choice

  • every function, regardless of whether it closes over variables, will be treated like a closure at runtime. this adds both overhead (a closure object is allocated for every function) and a level of indirection
  • closed over values are stored on the closure instead of the function, which nicely reflects the reality that we may have multiple different closures of the same function!

calls and functions and why fixed stack locals don’t work

so far we’ve only been writing statements at the top level of the program. there’s no notion of a callable chunk of code. with the introduction of functions in chapter 24, all the current top level state like the compiler, locals, and chunks / instructions gets moved into function objects.

previously with locals we were effectively operating in a single function world, which meant that all locals were allocated at the beginning of one global call stack. with functions that each have their own local environments, the author brings up an early idea implemented by fortran, where each function had its own fixed set of slots for its locals

this works if there’s no recursion and i’ll demo an example that shows why fixed, separate slots break down once you start to recurse:

fun factorial(n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1);
}

factorial(3);

assume we give factorial its own fixed set of stack slots

Slot 0: parameter n
Slot 1: temporary result for multiplication (the value in slot 0 * factorial (slot 0 – 1))

now call factorial(2). this produces slot 0 = 2 and slot 1 = 2 * factorial(1)

then call factorial(1). this produces slot 0 = 1

OH CRAP, that just overwrote slot 0 = 2, which the previous call still needs to finish computing 2 * factorial(1). when that outer call resumes and reads slot 0, it sees 1 instead of 2, so it ends up computing 1 * factorial(1) and screws up the entire expression

bob notes that fortran was able to get away with fixed stack slots simply because they didn’t support recursion!
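the fix clox lands on in chapter 24 is to give every call its own window into one shared value stack. each CallFrame records where that window starts, so local slots are relative to the frame and recursive calls no longer clobber each other. roughly (paraphrased):

typedef struct {
  ObjFunction* function; // the function being executed in this call
  uint8_t* ip;           // where to resume in this function's bytecode
  Value* slots;          // this call's window into the vm's value stack
} CallFrame;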

lox vm local variables visualization

in chapter 22 of crafting interpreters, bob nystrom walks through the implementation of local variables. the approach makes efficient use of memory by tracking each local's position and scope metadata during the compilation phase and leveraging that to locate the correct value directly on the execution stack (where we expect all local variables to end up, unlike globals, which are late bound and may be defined far away from where they're actually used).

what i found most complicated about this chapter is the number of states you need to track and hold to understand how the compile and runtime stages work together. it helped me to write down a few essential pieces of state while trying to understand it, so i figured i'd translate those notes into some sort of visualization because i think it might help others too

here’s a visualization of the compile phase where we’re converting the tokens into a byte code instruction sequence (chunks). the arrow indicates the parse position, where the compiler is pointing in the source code, and the variables on the right represent the state at that point.

side note: i didn’t bother going character by character – i moved the arrow to positions where there are actually side effects, since not all tokens produce the side effects i actually care about for this demo.

and here is the runtime execution of the resulting byte code sequence. as you can see, the first thing that happens is that the literal number 13 is pushed onto the stack. in this example the variable declaration's value is a literal, so it's known at compile time.

however, notice that there is no information about what the name of that constant is. is 13 the value of “foo”? or something else? what’s cool about this implementation is that it doesn’t matter at this point, because during the compilation phase we’ve already figured out where the local foo is going to live on the stack. based on the locals array and offsets built up in the previous phase, it’s going to be at position / offset 0.
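as a rough sketch of the compile time side (paraphrased from the book, and omitting the error check for reading a local in its own initializer), the locals array maps names to stack slots and resolveLocal walks it backwards to find the most recent declaration:

typedef struct {
  Token name;  // the variable's name as it appeared in the source
  int depth;   // the scope depth where it was declared
} Local;

// returns the stack slot (array index) for a named local,
// or -1 if it isn't a local and should be treated as a global
static int resolveLocal(Compiler* compiler, Token* name) {
  for (int i = compiler->localCount - 1; i >= 0; i--) {
    Local* local = &compiler->locals[i];
    if (identifiersEqual(name, &local->name)) {
      return i;
    }
  }
  return -1;
}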

first half marathon and training plan

i’m planning on doing my first half marathon this year, the syracuse half marathon! i’m also going to be posting my training updates here, mostly for myself to refer to

the race is on march 23, so that’s 10-11 weeks from now. that’s plenty of time for a good training block. my A goal is to finish in 1:45 (about 8:03 mile pace, 25min 5k pace), my B goal is to finish in 1:50, and my C goal is to finish somewhere close to two hours. this is all based on my current threshold pace for 5k, which i think is around 7:45 – 8:15 per mile, and keeping it conservative.

training wise i’m adapting the novice marathon program in hal higdon’s Marathon Guide book for a half marathon. a couple of interesting parts of his training program are the long run mile step back every 3rd week and the gradual increase of the mid week mileage. the purpose of the step back is to support recovery after a couple of consecutive mile increases before building back higher

unlike his novice program, instead of doing a saturday long run i’m doing a sunday one followed by a recovery run. he also packs all three non-long runs together consecutively, but i like having space between those runs for cross training / strength training or just rest – so i adjusted that too.

overall i’m optimistic about this program because it’s not too far off from my current weekly mileage and i’m coming off of a short break from running due to the weather lately, so i should adapt well to this but who knows. since i am going to be targeting a specific pace i know i need to throw some speed work and threshold runs in there so the breaks between runs mid week should help

here’s my full schedule (thanks claude ai for formatting my original csv into a table)

note: run 1 follows the long run, so that will be an easy run. runs 2 and 3 will both be easy if i’m not feeling great, but ideally one or both of them are threshold runs. will play it mostly by ear

Week | Run 1 | Run 2 | Run 3 | Long Run | Total Miles
1    | 3     | 3     | 3     | 6        | 15
2    | 3     | 3     | 3     | 7        | 16
3    | 3     | 4     | 3     | 7        | 17
4    | 3     | 4     | 3     | 5        | 15
5    | 3     | 4     | 3     | 9        | 19
6    | 3     | 5     | 3     | 10       | 21
7    | 3     | 5     | 3     | 7        | 18
8    | 3     | 6     | 3     | 12       | 24
9    | 3     | 6     | 3     | 10       | 22
10   | 3     | 6     | 3     | 8        | 20

for race pace and finish times i like to use this chart.

training log

i’m going to keep short updates here as i progress

1/15

  • training going well, been hitting the workouts and also did a short 45min tuesday group running training sesh (plyometrics mostly) at the gym
  • today did a 3 miler on the treadmill, 10min warm up and 10 cooldown with threshold pace in the middle
  • TIL that 1% incline is good for imitating outdoor wind resistance on a treadmill + lower knee impact. makes sense
  • form / mechanics notes: working on landing softer, more knee drive and less lower leg extension
  • pace feels a bit quick – will work on increasing incline but reducing pace
  • also may look into interleaving outdoor runs with treadmills at some point, weather permitting…

1/16

  • OK, so today i think i’m officially starting to overtrain…. i did 1hr of yoga at 5:30, then 45min of circuit training and sprinting at 8, followed by a 3 mile threshold. um, my right ankle felt wonky and weird to put pressure on. i think i also laced my right shoe too tight
  • i ALSO tried a slightly different gait (shortening leg extension to land closer to my center of mass) at the same time, which honestly felt better
  • i made an effort to run more lightly on the treadmill today (focusing on reducing impact sound mostly) and my stride felt much smoother
  • anyway, i’m pretty much set with my mid week mileage (9 total so far) so i’m just gonna rest up for my long run over the weekend (6mi)

1/19

  • completed my first long run of the halfy training!! ran outdoors for 6.2 miles. most of the route i picked was pretty snow packed and my legs were sinking with each stride. left calf muscles and achilles feel pretty sore – not sure if snow or new shoes (lone peak altras) or both
  • i had to avoid sidewalks in a couple of .5 – 1 mile stretches and ran pretty close to threshold pace because i wanted to get off the road quickly
  • feeling good though, the snow def. forced me to slow down for most of it. got to get in a bit of hill work at the beginning and end too. overall great workout

1/20

  • did a 3 miler today, started at easy pace and then did threshold for about a mile before dropping back to easy. good workout, but in the future i’m going to try to hold the pace for the entire session and reserve threshold workouts for specific days. it does seem like the treadmill picks up my HR monitor, so that’s good!

1/22

  • currently in week 2 of my training block. did a 5km with 3 1k repeats at 90-95% max HR. felt really tough, esp. towards the end. i actually cut the last repeat short by about 200m for the sake of time and also because i was at my limit
  • good workout, but i think maybe a shorter interval like 400m followed by 30s to 1min rests would be good in the future
  • wore olympus via 2 for first time today, bought a used pair for 80 bucks off ebay. really loving this new model. the via 1 has a very stiff / firm sole, and they seemed to have incorporated that feedback. on their site: “It’s that same high stack but with a softer midsole foam”. really like it now! might retire my via 1’s

1/24

  • did my final midweek 3mi workout of week 2! really didn’t feel like it because i didn’t sleep too well and it’s cold af. snow day and the streets were a bit unplowed, so i decided to scrap my original plan of going to the gym and run outside instead. wasn’t too bad, although starting earlier might be better because a lot of people around 7-8 were pulling out of their garages to go to work
  • did a fartlek workout with about 100m hill repeats 3 times. the entire run was pretty hilly, about 300ft elevation gain, so 100ft / mile. pretty tough workout
  • felt a slight ache in my left knee (weirdly my right knee has not bothered me at all) that went away with a quad stretch. will keep an eye on it

1/27

  • long run on sunday ended up being 8 mi instead of 7 b/c of a wrong turn. felt good. went out a bit too fast. should stick to 9-10min miles. ended up in a snowy patch which wasn’t great and had to walk for safety reasons
  • need better gloves if i’m to do more outdoor running…
  • feeling good, did a 3mi easy run on the treadmill this morning with a short 3-4min tempo.

1/31

  • did a final three mile run today at the gym for the week. was not feeling very motivated and pretty fatigued, probably from not great sleep this week. my watch tells me that my HRV is low. was supposed to do a workout but i just took it easy
  • for the four mile run earlier this week, i did 4 x 800 meter repeats. that was a pretty good workout, ran pretty close to my max heart rate

2/9

  • completed my 4th week of halfy training!! did 5 miles yesterday on the road, did it around 5pm and that was a bad idea b/c there were lots of cars. i should try to avoid side street roads unless running very early. the run felt terrible at first because i had a heavy lunch only a couple of hours earlier
  • the next two weeks are going to be slightly higher in mileage. 8min mile pace still feels a bit challenging, and i’m getting a bit nervous about sustaining that for 13 miles… i feel like i need to extend my long runs a bit and work in actual speed sessions off treadmills to be ready. or bump up the incline? or maybe just alternate outdoor and treadmill running 2 miles at a time

2/12

  • finished the first workout of the week, 5 miles. 1 mile warm up, 4x1000m at 7:40ish pace with 1.5-2min walks in between, 400m at 7:00 pace and the remainder as cool down. felt strong. wearing my hoka skyflows (varsity navy, wide) – they feel the best so far for running. my olympia via and via 2 from altra both aggravate either my right big toe or right knee
  • next workout i want to shoot for a 3 mile tempo, would be nice to do it outside too but the roads are still icy.

2/17

  • did my 9mi long run indoors, 3 on the track, 6 on the treadmill. left foot tingly at the end… shoelaces too tight probably. small blister on my left pinky toe. overall felt good
  • huge snow storm, may have to skip week day runs this week
  • so … i just realized that my table above actually has an extra run early in the training, which means the last long run lands on race weekend, which was NOT the plan. i really only have 4 weeks left, not 5. i want to make sure i build up to 12 and have a taper, so the next long run will be 10, then 12, then we taper off for 2 weeks with a 10 and then an 8
  • for remaining weeks, i want to make sure at least one midweek run is at race pace (8min)

2/19

  • had a 5 mile workout today with a 3 mile pace segment, had to cut the cool down short because my blisters were so bad. my left pinky toe and both sides of my big toes were very irritated. i was wearing my hoka skyflows. they do cramp my toes a bit, as much as i love the support. i have a pair of injinji toe socks coming so we’ll see if that helps

running economy and vo2

running economy is a complicated topic and hard to measure, but a common approach is to use vo2 (volume of oxygen consumed) as a proxy. according to wikipedia, “Those who are able to consume less oxygen while running at a given velocity are said to have a better running economy”.

i put together a few visuals to illustrate this concept better.

here’s a graph showing

  • oxygen consumption or vo2 on the y axis
  • velocity in meters per second on the x axis
  • as velocity increases, so does oxygen consumption. they increase together up to a point (vo2 max)
  • oxygen consumption plateaus / steady states at the vo2max at and beyond a specific velocity

now, if the athlete is able to train their aerobic system to run at the same velocity with lower oxygen consumption, you get this graph

  • the dotted black line is the original vo2 consumption at a given pace. the new solid line is the result of training
  • same pace, but lower o2 consumption. this athlete has improved their running economy!
  • similarly, if you graph the relationship between vo2 and velocity for different athletes, the one with the lower vo2 consumption at any given pace is more economical

i also find this relationship interesting because it tells you why increasing vo2 max is valuable. vo2 max roughly represents maximum effort, and running at vo2 max typically can’t be sustained for longer than about 11 minutes. right now, the athlete can only run at their max for 11 minutes. if you shift the max up, here’s what happens

  • the previous max velocity is now only a fraction of the new max, so less relative effort is required to sustain the same pace. they can now race at that same pace for longer! better endurance
  • the new max is associated with a higher velocity, so the pace they can hold for that ~11 minute max effort is even faster

precise vo2 max testing is typically done in a lab, hooked up to a mask that measures oxygen consumption while running on a treadmill at increasing intensity. one of my favorite running youtubers / olympic athletes is luis orta (venezuelan runner). he does a vo2 max test here and gets 80 mL/kg/min.

that is a ridiculous number because the average vo2 max for untrained individuals is around 30 – 40!

vo2 max at the end of the day is just a metric / one indicator. i used to see vo2 max videos everywhere on youtube when i first started running and it made me feel like i somehow needed to track it as part of my training. completely untrue.

jack daniels types of running training

jack daniels classifies running training into four categories (see his lectures here). i’ll summarize them here because i found it to be a helpful framework for building my own training program for the new year. each type adheres to the same general principle of minimum effort for maximum gain. he says if you want to improve a physiological function, you want to stress it, but you want to apply that stress at the lowest intensity that still produces the adaptation

easy runs

  • build aerobic base and ability to do higher volume runs
  • train at max stroke volume to gradually create cellular adaptations
    • mitochondrial density
    • fat oxidation
  • 60% of max heart rate

threshold training

  • build endurance by pushing out the lactate threshold. blood lactate accumulation happens at different paces / effort levels, so the goal is to push accumulation farther out relative to effort
    • accumulation is a function of how much lactate is produced vs how much is cleared
    • past the threshold, blood lactate rises continuously instead of plateauing
    • at or below threshold = steady state lactate accumulation (not rising)
  • training at threshold means training at the pace where going any faster results in lactate rising continuously
  • 82 – 88% of mhr
  • threshold is basically the pace you can hold for roughly 1 hour

interval

  • purpose is to maximize aerobic power. how much blood is delivered and how much of that o2 is converted to energy
  • aerobic power is approximated via vo2 max
    • o2 consumption is measured in millilitres of oxygen per kilogram of body mass per minute (mL/(kg·min))
    • vo2 max is max rate of oxygen consumption
  • 97 – 100% of MHR

repetition

  • kind of like intervals (honestly not sure why he called this out separately), except the focus is on even higher intensity followed by long rest periods. purpose is to improve running economy

as you go from easy running to repetitions, the main variables that change within a training session are intensity and volume. easy runs are high volume, low intensity. on the other end, repetitions and intervals are high intensity but low volume. this is a helpful lens through which to view running programs because the proportion of a type of training in a program tells you the type of race or performance it’s effective for

while i really like doing threshold training, my current volume of training is low so right now i feel like i’m sacrificing base building when i really ought to aim at building more volume and developing a larger base. right now i do higher intensity training twice a week, but i may dial that back to just once a week and dedicate my other days to easy runs. it’s hard for me to do two intense sessions a week without feeling the impact on my joints / ligaments, particularly my right knee – which tells me i should probably scale back the intensity and just focus on volume

crafting interpreters chapter 17 notes – infix parsing with pratt parser

there’s a saying that all problems in computer science / programming can be solved by another level of indirection. in this chapter the pratt parser is a great example of that when it comes to parsing expressions such as

  • simple numeric literals, e.g. 1 or 2
  • single operand / prefix expressions like -1
  • binary expressions like 1 * 2 involving numeric, equality, comparison, or logical operators
  • any complex combination of the above with groupings

back in jlox, expression parsing was based on recursive descent. in this chapter, the parse sequence is driven by a special function called parsePrecedence. two new abstractions (the parse rule table and the rule lookup function) come together in parsePrecedence, which is going to be the new entry point to expression parsing

static void parsePrecedence(Precedence precedence) {
  advance();
  ParseFn prefixRule = getRule(parser.previous.type)->prefix;
  if (prefixRule == NULL) {
    error("Expect expression.");
    return;
  }

  bool canAssign = precedence <= PREC_ASSIGNMENT;
  prefixRule(canAssign);

  while (precedence <= getRule(parser.current.type)->precedence) {
    advance();
    ParseFn infixRule = getRule(parser.previous.type)->infix;
    infixRule(canAssign);
  }

  if (canAssign && match(TOKEN_EQUAL)) {
    error("Invalid assignment target.");
  }
}

here’s a truncated example of some parse rules in our parse table. it’s a mapping of token types to a group of metadata (prefix parser, infix parser, and precedence level)

ParseRule rules[] = {
  [TOKEN_LEFT_PAREN]    = {grouping, call,   PREC_CALL},
  [TOKEN_RIGHT_PAREN]   = {NULL,     NULL,   PREC_NONE},
  [TOKEN_MINUS]         = {unary,    binary, PREC_TERM},
  [TOKEN_PLUS]          = {NULL,     binary, PREC_TERM},
  [TOKEN_NUMBER]        = {number,   NULL,   PREC_NONE},
};

unary is the prefix parsing function for the minus token, binary is its infix parsing function, and PREC_TERM is its precedence level. this is the getRule function that, given a token type, retrieves that metadata

static ParseRule* getRule(TokenType type) {
  return &rules[type];
}
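to see how the table, getRule, and parsePrecedence fit together, here's roughly what the binary infix function looks like (paraphrased; the comparison and equality operators are left out of the switch):

static void binary(bool canAssign) {
  // the operator token was just consumed; its rule tells us the precedence
  TokenType operatorType = parser.previous.type;
  ParseRule* rule = getRule(operatorType);

  // parse the right operand at one level higher precedence so operators
  // of the same level end up binding left-associatively
  parsePrecedence((Precedence)(rule->precedence + 1));

  switch (operatorType) {
    case TOKEN_PLUS:  emitByte(OP_ADD); break;
    case TOKEN_MINUS: emitByte(OP_SUBTRACT); break;
    case TOKEN_STAR:  emitByte(OP_MULTIPLY); break;
    case TOKEN_SLASH: emitByte(OP_DIVIDE); break;
    default: return; // unreachable
  }
}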

what’s unique about this approach is

  • the relevant parse function for a given token consumed via advance is fetched dynamically from the parse rule table. so given a token type of NUMBER for parser.previous.type, the first thing parsePrecedence attempts to do is locate the prefix function for that token
    • other prefix functions may themselves call back to parsePrecedence such as grouping if a left parenthesis is encountered
  • for chained expressions involving infix operators, e.g. 1 + 2 + 3, the current precedence level is used to continue consuming the following expressions in a left-associative manner. so parsing 1 + 2 + 3 becomes ((1 + 2) + 3)
  • adding new tokens involves setting a new rule for those tokens with their metadata (prefix parser, infix parser if it applies, and precedence level). the parsePrecedence function automatically obeys the precedence levels during parsing. in jlox, parsing precedence has to be carefully managed by ensuring that it’s reflected in the call sequence (top down execution where lower precedence parse functions call higher precedence ones)

unlike recursive descent top down parsers, where the syntax reflects both the grammar and the precedence order (lower precedence parse targets always invoke higher precedence ones), it’s harder to visualize the call sequence in a pratt parser because the exact call sequence only becomes apparent at runtime through calls to parsePrecedence (which decides how far to parse based on the current precedence). nevertheless this seems like a more extensible / configurable way to manage expression rules

purpose of zone 2 easy runs

i went for an easy run this morning and was thinking about the purpose of training and zone 2. a cornerstone of pretty much any aerobic training program is the easy (zone 2, 60-70% of max heart rate or 5-6 RPE) run. there’s usually the long easy run combined with shorter easy runs throughout the week. when i first started training for longer races (15k), i thought the sole purpose of these longer runs was to progressively overload until i’m comfortable running the race distance. so if i’m training for a 15k, i’m increasing my ability to sustain a comfortable aerobic effort little by little until i’m able to do it for my desired distance.

if i’m training for a 5k, there must not really be a purpose of doing these longer runs. right? there’s a principle in training called specificity – basically it means you tailor your training to the specific energy system and skills that you are trying to improve. so if you’re trying to become a better long distance runner, run long distances. if you’re trying to become a better sprinter, sprint! this seems pretty intuitive, except what’s not obvious is that if you want to become a better runner at any distance, you also want to incorporate long runs!

base endurance

i’m not really an expert on physiology and there’s a ton of resources covering the benefits of long runs, but my layman understanding of this so far is that doing easy runs at roughly 60% of MHR is what allows you to

  • build your heart muscle (increasing stroke volume or how much blood can be pumped per beat) with minimal effort
  • these improvements are primarily a function of duration. so, generally speaking, the longer you are working your heart at that intensity the more of the benefits (up to a point, we can’t run forever without risking injury).
  • allow your body (muscles, bones, ligaments, joints, etc) to gradually adapt to higher volume
  • by doing easy runs at higher volume without injury, you unlock higher volume of more intense workouts into your schedule. someone who is comfortably running 30 miles a week can introduce a couple of 5k intense threshold runs into the week to build even more speed and endurance. if you’re doing 5 miles a week, there’s just no room for that. nothing wrong with running 5 miles a week, but my point here is to illustrate the relationship between steady state volume and training opportunity

the minimal effort point here is pretty key. you can train at far higher intensities to build your heart muscle, but it turns out your heart’s current maximum stroke volume is reached at around 60% of MHR. so if you do an all-out run, your stroke volume is still the same – you’re just expending more energy for the same heart muscle building benefits. also, doing high intensity runs all the time means you likely sacrifice volume, aka less time overall in this zone. people are also all different – in some situations there may be runners that can do very high volume and intensity and that works for them. i know that’s not me 😀

there are also numerous other related responses that support this gradual volume buildup of the heart muscle, a couple that i notice come up often are:

  • increase mitochondrial density (mitochondria generate energy in a cell using oxygen and glucose) so higher numbers of mitochondria means being able to use more of the available oxygen and glucose during aerobic activity
  • increase in the ability to use fat stores as fuel instead of relying on glycolysis (glucose and oxygen), so you’re able to run longer

so over time, spending a lot of time in easy runs builds the heart muscle and its ability to pump out blood, and increases your capacity to make use of that higher volume of blood per beat thanks to cellular level changes like mitochondrial density (more efficient). how this translates to races is that you’re able to run them at any distance without getting as tired because your aerobic system is more efficient. and because of the gradual buildup in your overall muscular strength, you can run higher volume at a comfortable pace per week. this higher mileage then unlocks higher quality / higher volume intensity training.

jack daniels, a well known running coach, often says that you should know the purpose of your training. why are you running today? what is the purpose of this long run? well there’s the purpose of long runs. you do long easy runs because it builds the very foundation of your aerobic performance.

dyson v11 trigger repair & tips

back in January this year i ordered a refurbished dyson v11 off newegg (the full model name is V11 Animal+ Cordless Vacuum) for about $300 (new ones were close to $600) and it was working great up until end of November last month. the problem was that the trigger had stopped working – it wasn’t springing back into its normal position after depressing and wouldn’t turn on the vacuum anymore.

turns out this broken trigger on the v11 is a well known issue and it’s caused by a weak plastic arm / lever on the trigger assembly. it’s frustrating because why the hell would you make such a high use component, one that gets subjected to repeated force, out of thin plastic instead of metal? or at least make the plastic arm thicker so it doesn’t just crack in less than a year of use.

thankfully because this is such a common issue there were repair tutorials online and spare parts available through ebay. i was able to finally finish the repair yesterday and in this post i’ll share what resources i used and some tips (both for others and for myself in the future if i need to do this again…)

here’s the youtube video that documents the disassembly process and required tools. just a heads up, the trigger mechanism is embedded pretty deep and requires basically an entire disassembly of the vacuum. the video is less than five minutes long but i think it took me closer to 45min to get it all apart.

tips

you WILL need all the tools mentioned in the video, definitely the long torx screwdriver and pliers. you won’t be able to remove the trigger assembly without a pair of pliers (i tried). it will also be helpful to have some kind of gripper (things that look like tweezers but for electronics, most electronic repair tool kits will come with this) to grip onto wires later during re-assembly

buy a new complete trigger assembly with metal switch (or at the very least a metal trigger piece to replace the plastic trigger with). yes it’s pretty funny that there’s apparently an entire market providing more durable switches for the v11 than dyson themselves. in my first go at this, i did what the video suggested and tried gluing the broken trigger with superglue. i do not recommend doing this because the trigger ended up breaking immediately again and i had to repeat the entire process. maybe i didn’t let it cure long enough. maybe my super glue wasn’t super enough. whatever, just save yourself the trouble and replace the entire assembly. below is an image of one i found on ebay (note that it says v10 – it’s also compatible with v11).

during reassembly, there will be a point where you need to straighten / bend the metal ends of the electric connectors in order to pass them through various parts of the vacuum. you’ll know what i’m talking about if you end up going through the full disassembly. try not to bend/re-bend them too many times because you can easily break off the metal ends (see below)

in my first pass at this after i had glued the trigger back together, i actually broke off the metal piece by accident when trying to bend it back and then spent over an hour trying to re-solder it back on. i also have no idea how to properly solder and ended up burning a hole in my table cloth. anyway when you’re re-connecting those metal connectors back, use your pliers to adjust them to be close to 90 degrees (as they were before you had to remove them) but it honestly doesn’t have to be perfect. just use the screws to tighten them against the motherboard.

lox vm scanner

in chapter 16 for the lox vm, the scanner implementation takes on a completely different approach compared to jlox. when we implemented jlox, the scanner did a full scan of the source file and then created all the tokens in memory for the parsing phase

in the C implementation, the file is still read, but we don’t create a separate list of all the tokens by doing a full pass over the file. instead the scanner refers directly to the source and we only create as many tokens as necessary (no more than 2 at a time, since lox’s grammar only requires a single token of lookahead). this is a lazier and more memory efficient approach.

for example, here’s the scanner struct and how it’s initialized

 typedef struct {
   const char* start;
   const char* current;
   int line;
 } Scanner;

 Scanner scanner;

 void initScanner(const char* source) {
   scanner.start = source;
   scanner.current = source;
   scanner.line = 1;
 }
  • start refers to the beginning of a lexeme (say, an identifier)
  • current is the current character being scanned
  • there’s also some additional metadata like line number for debugging support

and this is the Token struct for representing a complete lexeme

typedef struct {
  TokenType type;
  const char* start;
  int length;
  int line;
} Token;
  • start is a pointer to the source – again we’re not allocating additional memory to hold token information
  • type is our special enum for token types like TOKEN_IDENTIFIER
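to make that concrete, here's roughly what token construction looks like in the scanner (paraphrased). notice it only records pointers and lengths into the original source buffer:

static Token makeToken(TokenType type) {
  Token token;
  token.type = type;
  token.start = scanner.start;                            // points into the source
  token.length = (int)(scanner.current - scanner.start);  // lexeme length
  token.line = scanner.line;
  return token;
}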

with the scanner and the token structs in place, the compiler drives the actual changes to these objects as it scans as much of the source code as it needs (and constructs tokens) to emit byte code sequences

ObjFunction* compile(const char* source) {
  initScanner(source);
  Compiler compiler;
  initCompiler(&compiler, TYPE_SCRIPT);

  parser.hadError = false;
  parser.panicMode = false;

  int line = -1;

  advance();

  while (!match(TOKEN_EOF)) {
    declaration();
  }

  ObjFunction* function = endCompiler();
  return parser.hadError ? NULL : function;
}

calls to advance and declaration will both eventually call out to scanToken, which uses the scanner to read and construct the next token. for example, if the token is a number, the compiler will emit two bytes via a call to emitConstant(NUMBER_VAL(value));
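for example, roughly what the number path looks like (paraphrased):

static void number(bool canAssign) {
  // parser.previous points at the number lexeme in the source
  double value = strtod(parser.previous.start, NULL);
  emitConstant(NUMBER_VAL(value));
}

static void emitConstant(Value value) {
  // two bytes: the OP_CONSTANT opcode and the index into the chunk's constant table
  emitBytes(OP_CONSTANT, makeConstant(value));
}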

the entire sequence of bytecodes is built this way, the compiler driving the scanner forward and emitting byte code sequences on the fly.

migrating a rails app from mongo to postgresql

my team and i recently completed a database migration from mongodb to postgresql for one of our rails apps. the service is a graphql api built on rails 7, backed by a mongodb database (an m40 cluster managed through mongo’s atlas platform) with ~500gb of data, and we performed a live zero-downtime migration to a db.m5.2xlarge RDS instance running in our own aws account. the application is organized like a pretty standard rails app: all data is represented by rails models and data access goes through an object mapping layer using mongo’s object document mapper (ODM), mongoid.

the requirements for this project were pretty straightforward

  1. stop using mongo
  2. don’t take our service down to do an offline migration (given the amount of data we needed to move, the maintenance window we would need would’ve been way too long anyway based on some of our initial tests)

our high level approach was to use the double writing pattern: dual write to both data stores and put reads behind dynamic feature flags, backfill the tables one collection at a time, switch the reads over to the new database, and then cut off the old reads and writes.

this is a very common technique in service to service migrations when teams undertake monolith to microservice transitions (which were all the rage five to ten years ago, though the trend is reversing as of late), and the same process can be applied to switching data stores within the same service. the new reading/writing code in the service just hits a new data store instead of a new api / service.

setup phase

we started by setting up an initial connection to postgres and added some basic tooling

  • set up the postgres database and the rails integration. our infrastructure teams spun up our new postgres instance on RDS, sized comparably to the current storage on atlas. in the rails app, we set up the active record ORM alongside the existing mongoid ODM and updated both our development and CI setups to spin up a postgres image
  • set up data transfer / backfilling utility scripts that extract mongo document data for a given collection, transform it into a postgres compatible format, and insert it into the postgres database. for example, nested documents become normalized foreign key relationships
  • set up feature flagging (we used flipper) to dynamically control the read switch (double writing was not behind switches, but we made sure to wrap our new writes with catch-all exception handling to never interrupt requests)

double writes

we divvied up most of the work by resource types and tackled them in the order of some combination of entity complexity (lots of relationships, super nested) and data volume (getting an early start on the largest collections was important since we had deadlines to hit).

for each resource in the system, we did the following

  • create active record equivalents of the current ODM models. so this means bringing over model level unit tests, validations, and any database level constraints. to uniquely identify migrated data, we made sure to include a mongo_id column on every new table
  • set up dual writes. most of the writes happen through graphql mutation resolvers at the graphql API layer so this involves adding adjacent active record write logic.
  • duplicate existing unit and functional tests to cover the new models and code
  • set up the backfilling code. the shared migration script was sufficient for most of our data (simple batch read, transform, bulk insert), but a handful of our models with more complex entity relationships necessitated their own migration logic

backfill and read rollout

  • once dual writing had been enabled for a while and we were confident there were no issues with the new data, run the backfill scripts. depending on the collection, this took anywhere from minutes to days
  • upon backfill completion, verify the successful migration using a custom built data verifier script that ensures that all the mongo documents were successfully transferred. this script knew how to compare both simple flat docs and ones with very nested relationships by using rails model level reflection API
  • finally, switch the reads from mongo to postgres. this was done through flipper so no additional deploys are necessary

cleanup

  • once all dual writing is set up and all reads are done against postgres, remove the double writing and keep only our postgres active record reads and writes.
  • remove all traces of mongo
  • celebrate!

challenges

no project is without its challenges / setbacks, and wow, we had a number to deal with (and overcome!). we had issues at every stage of the sdlc

  • coordinating with other teams making changes to the service. we had to enact a code freeze since we were running into instances of people introducing new writes without the flags/dual writing stuff we required
  • wading through hard to understand business logic areas with low test coverage. we needed to create active record equivalents of a lot of writes, but some writes were fairly complex (very stateful, lots of conditions) and involved a coordination of multiple domain models
  • keeping the new active record models, tests, and scripts isolated. we couldn’t just delete the current application code, so the new models needed to live alongside the old ones. we wanted to preserve the model names as much as possible, but you cannot have two models with the same name in models/, so we introduced a postgres namespace across the board to house the new code. this was a fantastic solution that made it both easy to add new models and to delete the old ones later
  • database schema migration automation problems. we initially ran the new rails schema migrations by hand, but when we switched to automating the schema migration using k8s/helm, we accidentally made migrations run as one-off jobs (instead of pre-release hooks). as a result, deploys still succeeded despite failing migrations
  • some of our collections are large, so our backfill scripts needed to run anywhere from several hours to several days. this increases the likelihood of running into issues mid data transfer, so it’s important for the scripts to be idempotent and resumable. for the idempotent part, we added a mongo_id column to all of our postgres tables to represent the identity of the migrated mongo record (in most backfilling instances, with only a couple of exceptions, we skip the insert based on the mongo id if it’s already migrated). for resumability, during migration we always read mongo documents ordered by their primary key (lucky for us the first four bytes of the 12 byte id is the creation timestamp) and we log out the last key of the current batch during migration processing as a checkpoint to use later as a cursor
  • setting off alerts when running backfills because of elevated reads / writes against postgres, which was in the call path of all existing requests. we ended up creating a read only mongo replica off of our primary in atlas to use for our backfilling. unfortunately, while this solved the contention issue, it introduced new problems around data consistency. for example, there was an instance where i ran the backfill against an outdated replica and ended up inserting stale records into the new database. luckily the verifier detected the missing records and i was able to drop the table and re-run the backfill against a fresh / up to date replica
  • missing mongo key constraints and the existence of duplicate records. we had a number of collections containing dupes due to missing uniqueness indices, so when we added the appropriate uniqueness constraints to the new tables in postgres, the backfilling process blew up because the mongo data was bad. this required some data cleanup, and one of my teammates wrote a handy de-duping script using mongo’s aggregation API to identify and remove dupes by gathering the dupes for any given document key combination into lists and then keeping the latest record while purging the rest.
    • one minor snafu we ran into was that the aggregation code does a lot of the grouping of documents in memory on a node, and in one instance this caused a memory spike that impacted average performance while the script was running
    • based on the logs, we seem to get a good number of duplicate insert errors due to race conditions between requests attempting to modify the same resource at the same time, which probably explains why we had so many dupes in the old database to begin with. most of these cases can be ignored, but it would be good to figure out why they’re happening so often
  • bad new data being inserted into our postgres database due to incorrect new code. for example, there was a situation where we were writing a UTC offset attribute into mongo through the ODM and when this got carried over to the active record class, it was only writing positive UTC offset values and excluding all negative offsets due to a bad guard clause i added. oopsie
    • we also had minor and more subtle bugs, like timestamps not being properly updated. for example, in active record we needed an explicit .touch when no attributes changed but clients expected an updated timestamp. this happened out of the box with mongoid
  • data divergence in the dual writing code during upserts, caught by the verifier. for example, some records had fields that accrued values over time, but once dual writing got introduced and was executed by a new request, only the most recent data in the payload got inserted into the new database (the values previously accrued on a field in the mongo database were not carried over). unfortunately, this data gap wasn’t addressed by our backfilling because the backfilling code skips dual written records, so the historical values were never carried over for those records during that process.
    • to illustrate this with a scenario: let’s say a mongo record was created before dual writing and its field accrues value 1. time passes. we release the dual writing code. a new request upserts the same record, this time with value 2. two writes happen: one to mongo, which ends up with [1, 2], and one to postgres, which only has [2] (the most recent value).
    • to fix these issues, we wrote one off data sync / repair tasks to fix the diverged records. pretty much any record that performs upserts and whose backfilling strategy was an insert_all (which skips on conflict) was a candidate for divergence.
  • contending with ongoing performance problems in the service and trying to differentiate whether degraded performance was caused by our new code or was already there (turns out a little bit of both!)
  • on rolling out reads for a single high traffic collection, the entire service went down for a solid 5-10 minutes and i couldn’t even access the flipper UI because none of the pods were responsive. turns out this was caused by missing indexes, which pegged the RDS CPU at 100% due to full table scans against that table

we did a pretty great job managing these issues as a team and right now we’re fully on postgres and it looks like it’s running smoothly so yay!

runners knee update

good news! the runners knee pain that i was experiencing back in november is no longer an issue. i’ve been clocking 13-14 miles and slowly building back up to 15/16 miles per week over the past two weeks and i haven’t been experiencing any pain around my patella. granted, i’ve mostly been using assault treadmills at the gym (i got a 1 month membership to avoid the ice and snow of december) so that’s lower impact, but i’ve also been running harder than usual so maybe it cancels out. i did spend a couple of weeks before that outside too, so there’s good reason to think i’m pretty well recovered.

the funny thing is i think the thing that actually helped me was taking an entire week off running and ONLY doing strength training instead of doing both low intensity running AND strength training (specifically ones for quad strength building and my adductors). trying to do both was not actually working for me – i don’t think it was enough for the inflammation around my knee to actually subside. i live in a very hilly area so in reality even though i was doing low intensity, slower pace running i think i was still putting too much load on my knees.

so there you go, taking an entire week off running and focusing only on rehabilitation exercises was what finally helped. anyway here’s to another year of hopefully injury free running in 2026, peace.

my knee self diagnosis: patellofemoral syndrome (runners knee)

since increasing my weekly mileage from 10 to 16 i started noticing mild pain on the medial sides of both of my knee caps (my right more so than my left). i also added superfeet arch insoles into my shoes at around the same time, so that may have also affected my running mechanics.

from what i’ve been able to research, the most likely culprit is patellofemoral syndrome aka runners knee, given the proximity of the pain to the knee cap. it’s on the medial side just underneath the knee cap. this hasn’t really seriously affected my daily mobility or even my running since it’s very mild, but it’s something i want to make sure i nip in the bud before it develops

here’s a table of common causes and which ones i believe apply to me

cause | applies?
kneecap misalignment | don’t know
overuse | most likely. 10 -> 16 is a 60% increase! recommended is closer to 10 – 15%
injury or trauma | no
weak thigh muscles | possible – i haven’t incorporated quad strengthening into my routine yet
tight hamstrings | unlikely, esp. because i stretch these during yoga often
tight achilles tendons | maybe, i don’t stretch my achilles
poor foot support | could be affected by my new arch “supports” that may be throwing off my normal gait
feet rolling in | maybe? most of the roads and sidewalks i run on have camber/slope. when i run on the road, i run on the left so there’s a leftward slope which i’m sure affects my foot roll motion

of this set of causes, the top ones are the most likely for me

my current recovery plan is

  • incorporate quad strengthening exercises with focus on compound movements
    • sumo squats
    • bulgarian split squats
    • squat jumps
    • lateral jumps
  • stretch calves and achilles post-run
  • reduce weekly miles from 16 to 14 or even back to 10-12 per week
    • shift my current 8,2,3,3 pattern to 4,1,2,3 (halving my long run, then progressive increase throughout week)
  • remove the arch supports from my shoe (it’s an extra variable i don’t want to keep around…)
  • icing knees at end of day to reduce inflammation
  • knee cap mobilization exercises, also EOD

i’ll do another report in 3 weeks and let you know how it went!

xor and mod 2

so there’s an interesting property connecting the XOR operation and mod 2

turns out, the xor (^) of any sequence of bits is equal to the sum of those bits modulo 2

for example

1 ^ 0 ^ 1 ^ 1 is the same as (1 + 0 + 1 + 1) % 2

if you take this step by step, the xor side:

1 ^ 0 = 1

1 ^ 1 = 0

0 ^ 1 = 1 (answer)

the modulo side:

1 + 0 + 1 + 1 = 3

3 % 2 = 1

why?

let’s look at the truth table for XOR using two bits

left bit | right bit | xor result
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

xor table

XOR is an exclusive OR, so it will only be 1 if there’s ONLY ONE bit that’s on. if both bits are on, or neither is, the result is 0. what other operation on two operands gives 0 for (0, 0) and for (1, 1)? sum modulo 2!

this equivalence exists because when both bits are 1, their sum is 2, and 2 mod 2 is 0. when both bits are 0, the sum is 0, and 0 mod 2 is 0. when only one of them is on (an odd count), the sum is 1, and 1 mod 2 is 1

even though we’re only looking at two bits, this actually generalizes to any sequence of bits, because XORing any sequence of bits results in 0 when there is an even number of 1 bits (including none) and 1 when there is an odd number of 1 bits
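here’s a tiny throwaway C program that checks the claim for the example above:

#include <stdio.h>

int main(void) {
  int bits[] = {1, 0, 1, 1};
  int n = sizeof(bits) / sizeof(bits[0]);

  int xored = 0, sum = 0;
  for (int i = 0; i < n; i++) {
    xored ^= bits[i]; // fold the bits with xor
    sum += bits[i];   // plain sum of the bits
  }

  // both print 1: the xor of the bits equals their sum mod 2
  printf("xor = %d, sum %% 2 = %d\n", xored, sum % 2);
  return 0;
}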

short vs middle vs long distance

ever wondered what it means for a runner to be a “middle distance” or “long distance” runner? in the running / racing world there are three main categories of distance events that differ by distance range

short or sprint distance

these are traditional 100 meter (100m), 200m, 400m, and the 4x100m and 4x400m relays. these are pretty much purely anaerobic events. anything beyond 400m is in the middle distance category where the running starts to demand both high aerobic and anaerobic work

medium distance

common track distances are the 800m, 1500m, the mile (1609m), 3000m and the steeplechase variations involving obstacles and water jumps. anything beyond 3000m is going to be long distance

long distance

this is where my current comfort level is with running, although i do most of my higher intensity work in the short distances. common races in this range are the 5000m or 5k (though some people also consider the 5k a medium distance event), 10k, half marathon (21k), marathon (42k), and beyond (ultra marathons) like a 50k (31 miles). pretty much most road racing and cross country running fall into long distance category.

the longest official race i’ve run so far is a super popular local 15k (https://www.boilermaker.com/). i’ve been running this race for the last 3 years. my impression is that the 15k is not a common race distance (compared to the 10k) because when i share this with people they always express surprise that such a distance is even a thing. my goal next year is to run a half marathon, so hopefully that will be my new long race record!

boilermaker fun fact: the boilermaker actually draws a good number of elite international runners – this past year the winner was john korir of kenya who’s one of the current top 10 marathon record holders!

boilermaker fun fact 2: not sure if this is verified, but i learned this through my wife. the event takes place in july, which seems odd because it’s a distance event smack in the height of summer heat. but this is a couple of months before the marathon majors in the U.S (nyc, boston, chicago…) that run between september and november, so this off season schedule suits international runners that are training for the majors. i think this sort of makes sense because if they stuck the race in november, there would probably be a non-existent elite pool…

anyway, here’s an easy / quick way to remember these ranges

short distance – up to a single lap on a standard outdoor track (400m)

medium distance – up to 3k / two miles / 8 laps on a standard outdoor track

long distance – everything else

running training load update

i’m currently working on running a consistent weekly mileage of 16 miles this year and hopefully making my way up to 20-25 by the beginning of next year.

so far … it’s been going mostly well. a couple of weeks ago, following a 5k race (i hit a pr of 23:57 at a 7:43 mile pace!), i started experiencing some very mild symptoms of runner’s knee / patellofemoral pain syndrome (more so on my right knee, towards the medial underside of the patella), but it seems to be subsiding / not getting worse over time. i’ve been trying to loosen up my quads a bit with a foam roller to see if that helps, but i’ll keep an eye on it

my current training schedule is:

sunday – long run (8 miles)
monday – recovery / easy run (2 miles)
tuesday – recovery / strength training (lower body) (3 miles)
wednesday – easy run combined with a workout like strides or tempo
thursday – recovery / strength training (upper body) (3 miles)
friday – easy run combined with a workout like strides or tempo
saturday – rest / recovery. no strength training, to keep the legs fresh for the long run the following day

this schedule is basically identical to the boilermaker 15k training program that i’ve been following (very inconsistently) for the last 3 years. in my first two boilermakers i ran with my wife and we averaged about a 12 min/mile pace, finishing in just under two hours. this year in july i ran by myself and finished in 1:28 at a 9:31 mile pace.

the key thing about this training schedule is that it follows a mostly low intensity, 80/20 philosophy where at least 80% of the running is easy and at most 20% is high intensity. with 16 miles per week, 20% is about 3 miles, and that’s how much i try to spend on higher intensity running, distributed between tuesday and thursday. outside of that, i try (but not always successfully…) to stick to an easy pace of 10-11 min/mile.

there’s a couple of tweaks i’d like to start making to my running moving forward to hopefully reduce any risk of injury and improve my overall enjoyment of running

  • adopt RPE (rate of perceived exertion) as my primary measure of effort during runs, instead of glancing at my watch first to gauge effort based on pace or heart rate. i often run on hills, and focusing on pace sometimes causes me to go much faster than i should on easy days
  • pick a couple of specific, recurring workouts for my run workout days on tuesday and thursday. right now it’s a bit make-it-up-as-i-go, and i’d like to remove that decision making on the day of

hexadecimal notation

hex notation shows up a lot in computing so it’s really useful to understand. it’s really hard, though, to take your base 10 lens off, because that’s what we’re so accustomed to!

in base 10 positional notation, each place holds one of 10 digits (0-9). this is really handy because when we go beyond 9, we can shift over and use a new position to denote 9 + 1. so the value contributed by each position in a base 10 integer is essentially the digit times the radix (10) raised to the power of the position index, which starts at 0 on the right.

for example, take the number 128 in base 10: the digit symbol 8 in the rightmost position represents just the value 8 (8 × 10^0). as you move leftward to a new position, each digit is worth another power of the radix – the 2 represents 2 × 10^1, the 1 represents 1 × 10^2, and so on up to 10^n for the leftmost position.

the same set of digits in a base 16 system looks identical, but the actual value is different: 128 read as base 16 is 296 in base 10. from right to left, 8 + 32 + 256 = 296. this is because rather than holding 10 symbols in each place, hex holds 16 symbols

in base 10, each place holds one of 0,1,2,3,4,5,6,7,8,9. in base 16, each place holds one of 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F, where A = 10 (base 10), B = 11, C = 12, D = 13, E = 14, F = 15. so A in base 16 is equivalent to 10 in base 10. when looking at this for the first time it looks wild, because you’re so accustomed to equating the symbols “10” with the value ten (both in base 10). switching bases really requires you to decouple the numerical symbolic representation (which may or may not be base 10) from the value (which you still want to think about and write in terms of base 10).
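
you can see this decoupling in a shell, since bash arithmetic lets you write a literal with an explicit base (base#value):

echo $(( 16#128 ))   # 296 – the symbols "128" read as base 16
echo $(( 10#128 ))   # 128 – the same symbols read as base 10
echo $(( 16#A ))     # 10  – the symbol A carries the value ten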

one of the handiest things about hex, and why it’s commonly used in computing, is its relationship with binary (base 2) notation. machines encode all information in binary. compared to decimal and hexadecimal, binary notation only holds 2 values in each position (0 and 1). the interesting relationship between binary and hexadecimal is that 16 is 2 raised to the 4th power. put another way, we can represent any single hexadecimal digit with four binary digits and vice versa. this makes converting values between the two bases much easier than converting between binary and base 10. i highly recommend checking out this khan academy video to gain an intuition behind the why

thanks to this relationship, we can use hex as a far more compact literal representation of binary values. while binary is what the machine works with, hex is far easier for humans to read and write. for example, the bits 1111 can be represented with just F, since both represent the value 15 (decimal). four bits can represent 16 values (0 through 15). what else has exactly 16 possible values? a single hexadecimal digit! and since this works per group of four bits, it extends beyond just four bits – we can use hex to quickly write out pretty much any sequence of bits in most computing architectures, whether they’re 32 bit (8 groups of 4 bits) or 64 bit (16 groups)
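
for example, with bash arithmetic (2#… is a binary literal, and printf %x prints a value in hex):

echo $(( 2#1111 ))              # 15
printf '%x\n' "$(( 2#1111 ))"   # f
# a 32-bit pattern: read each group of 4 bits as one hex digit
printf '%x\n' "$(( 2#11011110101011011011111011101111 ))"   # deadbeef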

how do you inspect a shell-less docker image?

a common task of mine is opening bash in a container to inspect the file system…

but what happens when there is no shell at all in the image?

for example

FROM scratch

WORKDIR src

COPY README.md .

and if i run docker build . -t minimal-image to build the image, how would i confirm the contents were indeed copied over?

if i run docker run minimal-image:latest bash, i get

docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "bash": executable file not found in $PATH: unknown.

this makes sense because the scratch image doesn’t actually contain anything. it’s not shipped with a bash interpreter.

so what to do…

the workaround is to use the docker export command. this requires a container, so first create one from the image (the trailing echo is just a placeholder command – docker create needs one since scratch has no default CMD, but the container is never actually started)

docker create --name minimal-container minimal-image:latest echo "hello world"

and then we can finally export this to a .tar file

docker export minimal-container -o out.tar

now let’s extract the tar into a directory called tmp (docker export writes a plain, uncompressed tar, so no gzip / -z flag is needed). if i don’t specify a destination directory, the contents get extracted directly into my current directory, mixed in with my host files! don’t want that 🙂

mkdir tmp && tar -xf out.tar -C tmp

this gives me, with ls tmp

dev
etc
proc
src
sys

recall that the WORKDIR instruction earlier set the working directory to src right before the COPY instruction, and src is indeed where i find the file i copied.
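
listing that directory should show the copied file:

ls tmp/src
README.md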

anyway, that’s how you inspect the contents of an image without a shell!