stateful vs stateless JWTs

JSON Web Tokens (JWTs) are cryptographically signed JSON objects. The signature is what provides the trust guarantees, since consumers of a JWT can verify it (typically with the issuer’s public key). Now, there are two types of JWTs: stateful and stateless.

Stateless JWTs are probably the most common kind. All the information the application needs about an entity is contained in the JSON itself (username, role, email, etc.). When a stateless JWT is transmitted in a request, either via cookie or header, the application base64-decodes the JWT, verifies the signature, and takes some sort of action.

Stateful JWTs contain a reference to information about an entity that is stored on the server. This reference is typically a session ID that points to a session record in storage. When this JWT is transmitted to the backend, the backend performs a lookup in storage to get the actual data about the entity.

There’s almost no good reason to use stateful JWTs because they are inferior in almost every way to regular session tokens:

  • Both session tokens (a simple key-value string pair) and JWTs can be signed and verified by the server, but that whole process is simpler (and more battle-tested) with plain session tokens. JWTs involve an additional decoding step, plus plucking keys out of the decoded JSON before verification. Since most applications rely on third-party JWT libraries to handle this (and those libraries vary greatly in implementation and security), the attack surface of JWTs is larger.
  • Wrapping a single string in a JSON envelope adds extra bytes to every request. There’s nothing to be gained from that overhead if you’re only transmitting one value.

Stateless JWTs, in my experience, are most commonly used in service-oriented and microservice architectures where there is some collection of backend services and frontend clients. There’s typically a central authentication service that talks to a database with user information. Client requests from the frontend are authenticated with this service and receive a JWT in return. Clients may also need to communicate with backend services that are behind access control, so they pass along the stateless JWT as a bearer token in an HTTP header. The backend service verifies the JWT and uses the claims to make an authz decision.

The reason this is a popular flow is that teams are able to act on the JWT without performing an additional lookup against a user service. They do have to verify the JWT, but they don’t need to maintain any additional state about the user or communicate with another service. This level of trust only works because JWTs are digitally signed by an issuing party (in this case, the auth server). If clients were just passing plain JSON, there would be no way for services to verify the integrity of the information (has it been tampered with?) or its source (how can I be sure it was issued by the auth server?).
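To make the flow concrete, here’s a minimal sketch using Python’s PyJWT library (the claim names, the “admin” role check, and the throwaway key pair are my own assumptions for illustration, not a prescription):

import jwt
from cryptography.hazmat.primitives.asymmetric import rsa

# throwaway RSA key pair for the sketch; a real auth service loads its own keys
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# auth service: sign claims about the user with the issuer's private key
token = jwt.encode(
    {"sub": "user-123", "email": "user@example.com", "role": "admin"},
    private_key,
    algorithm="RS256",
)

# backend service: verify the bearer token with the issuer's public key, then use the
# claims to make an authz decision -- no lookup against a user service needed
try:
    claims = jwt.decode(token, public_key, algorithms=["RS256"])
    allowed = claims["role"] == "admin"
except jwt.InvalidSignatureError:
    allowed = False  # tampered with, or not issued by the auth server we trust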

JWTs as sessions

One common argument against stateless JWTs concerns their use as sessions, where the application offloads the session expiry mechanism entirely to the JWT. The two strongest arguments I know of against using JWTs as sessions are around data freshness and invalidation.

If you’re using a stateless JWT as a session, the only ways to really expire it are to set a new JWT on the client with an expiry in the past or to change the signing key (which invalidates all sessions). Fully client-side cookie sessions have similar limitations around invalidation. Data freshness is a related issue – when invalidation is hard, so is updating. If you need to revoke a user’s permissions, you can’t do that without issuing an updated JWT, and you can’t force that update onto a client you don’t control when the session state lives entirely client-side.

why can’t you tamper with a JWT?

JWTs are a very popular way of transmitting claims between systems. They’re commonly based on public key cryptography so that the claims can be verified and the verifier can be confident they were issued by a trusted entity.

microservice architectures will commonly use the claims to perform access control. For example, the claims may contain a user’s ID and their roles. This information can then be used to allow or deny access to resources.

One question that inevitably comes up when implementing JWT flows is:

How can I be sure that this JWT isn’t fake? How do I know it’s not tampered with??

if you don’t verify the signature, you really can’t be sure. JWTs contain a signature, which is the output of a signing algorithm such as RS256 (RSA with SHA-256). The issuer of the token hashes the header and payload of the JWT using a one-way hash (SHA-256) and then signs that hash with its private key. The resulting signature is what gets stored in the final segment of the token. If anything about the contents of the JWT changes, the signature will no longer match.

on the receiving side, the only way to trust the token is to verify the signature. Loosely speaking, the signature is first “decrypted” using the issuer’s public key (which the issuer usually publishes), recovering the hash that the issuer signed. If that operation succeeds, you can be confident the token was issued by the party holding the corresponding private key. However, at this point you haven’t yet verified whether the contents have been tampered with or changed.

to verify the integrity of the actual payload, you perform the same hash over the header and payload and compare that output to the hash recovered from the signature. If they match, you now have confidence that the claims were not tampered with! So there are two levels of verification happening. The first is successfully recovering the signed hash with the public key. If that fails, the token must not have been issued by the trusted party. For example, if I generate a JWT using my own private key, its signature can only be verified with the corresponding public key. If I don’t share that public key with another party, they cannot trust me. So if a service is unable to verify the signature using the public key it has, it cannot establish trust.

by the same token (hehe), if the hash comparison fails, it’s possible that the token was issued by a trusted party but the contents of the JWT were changed and no longer match what was used to generate the original signature. This is a sign of tampering – either by another party or even by accident in the JWT-consuming service (perhaps there’s a bug in the signature verification code).
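For the curious, here’s a rough sketch of what that verification amounts to for an RS256 token, using Python’s cryptography library directly (the token string and public key are assumed to come from elsewhere; real JWT libraries do all of this for you, and the “decrypt then compare hashes” steps are folded into a single verify call):

import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_rs256(token: str, public_key) -> bool:
    header_b64, payload_b64, sig_b64 = token.split(".")
    # the signing input is exactly the base64url-encoded header and payload
    signing_input = f"{header_b64}.{payload_b64}".encode()
    signature = base64.urlsafe_b64decode(sig_b64 + "=" * (-len(sig_b64) % 4))
    try:
        # verify() hashes the signing input with SHA-256 and checks that hash against
        # the signature using the issuer's public key -- both "levels" in one call
        public_key.verify(signature, signing_input, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        # wrong key (untrusted issuer) or the header/payload changed after signing
        return False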

what the reflected XSS

reflected XSS attacks are a common way of tricking a user’s browser into executing malicious code. I’ll share one definition I found from Mozilla and unpack the key terms and concepts.

When a user is tricked into clicking a malicious link, submitting a specially crafted form, or browsing to a malicious site, the injected code travels to the vulnerable website. The Web server reflects the injected script back to the user’s browser, such as in an error message, search result, or any other response that includes data sent to the server as part of the request. The browser executes the code because it assumes the response is from a “trusted” server which the user has already interacted with.

https://developer.mozilla.org/en-US/docs/Web/Security/Types_of_attacks#cross-site_scripting_xss

there’s a few key phrases here that are important to understanding XSS:

  • malicious link – this is the link sent by the attacker to a user of a web service. Let’s assume you’re the user and the service is my-bank.com. The link may look like “my-bank.com?alert=<script>… malicious code …</script>”, which contains code that the attacker wants to execute in your browser.
  • malicious site – this is a site owned by the attacker. A malicious site doesn’t need to exist for an attack to happen, but it’s one place an attacker can get you to submit details that they can use to construct a scripted attack. For example, let’s say they need your email address to perform an attack. The site might have a fake form that collects your email address and then redirects you to “my-bank.com” with an embedded URL script.
  • vulnerable website – this is the site that is vulnerable to XSS attacks. In general, that includes any site that doesn’t escape or sanitize inputs from the client. The problem is that this can lead to the browser executing attacker-provided javascript.
  • web server – this is the bank service’s backend server
  • “trusted” server – this is the bank server that returned the HTML containing malicious code that the user’s browser executed. Trusted is in quotes here because the server is returning javascript that it did not intend to. So it can’t really be trusted.

XSS attacks take advantage of:

  • A web service that liberally accepts user-provided input (an attacker can replace a safe input with client code) and renders that input without sanitization, permitting arbitrary code execution via injected script tags (see the sketch after this list)
  • An established trust between the user agent and the web service. Any malicious code may execute in the context of an established session between the user and the service. For example, the user may be logged in and therefore all requests originating from the page (which will contain auth related cookies like session cookies) are trusted by the backend.
  • An unsuspecting user that blindly clicks a link (perhaps emailed to them) or fills out a form on a malicious website
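To make the first point concrete, here’s a minimal sketch of a reflected XSS bug and its fix, assuming a small Flask app (the endpoint names are made up):

from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

# vulnerable: the "q" parameter is reflected into the HTML response unescaped,
# so a link like /search-vulnerable?q=<script>...</script> executes in the browser
@app.route("/search-vulnerable")
def search_vulnerable():
    query = request.args.get("q", "")
    return f"<p>Results for: {query}</p>"

# safer: escape user input before reflecting it back; script tags render as text
@app.route("/search-safe")
def search_safe():
    query = request.args.get("q", "")
    return f"<p>Results for: {escape(query)}</p>"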

err how is this different from stored XSS?

the only difference between reflected XSS and stored XSS is that with stored XSS, the malicious code is actually stored on the vulnerable web service’s servers. For example, let’s say Twitter is the vulnerable website and you’re a Twitter user. Now let’s assume you’re following someone who is an attacker, and they submit a tweet containing malicious code that they know will be executed by browsers when it gets rendered, say, in the newsfeed of their followers.

so here’s how it goes down – the attacker submits a tweet containing script code. That tweet gets stored on the Twitter servers (this is where the word “stored” comes from). The tweet is later rendered in users’ news feeds, and when it is, its contents get executed as javascript. Boom, that’s the XSS attack. Just like reflected XSS, this can be prevented by ensuring that user input (in this case, tweets) is sanitized.

Additional resources

  • https://www.stackhawk.com/blog/what-is-cross-site-scripting-xss/
  • https://developer.mozilla.org/en-US/docs/Web/Security/Types_of_attacks#cross-site_scripting_xss

most devs are not the audience for the CAP theorem

Eric Brewer presented CAP at a distributed computing conference in 2000 to designers of distributed systems, most of whom were familiar with relational databases and their consistency guarantees. His intention was to start a conversation about the trade-offs between consistency and availability that needed to be made for early cloud-based storage systems. His main argument was that for cloud storage systems to be highly available, some level of consistency needed to be sacrificed.

most developers like myself are using distributed systems – not designing them. more importantly, I find that I’m often required to think about an app’s data usage patterns and how the storage system supports those patterns at levels far more granular than what CAP offers. as a distributed systems user, CAP can basically be reduced to “there’s a trade-off between availability and consistency”. that’s a concept I find much easier to digest on its own.

more useful questions / concepts than CAP to grapple with are…

  • is replication synchronous or asynchronous? Is this adjustable?
  • In the event of node or network failures, what is the data recovery process like? what writes get rolled back?
  • is there support for transactions? what level of isolation is supported (are dirty reads possible?)
  • can I read my own writes?
  • how do I scale reads and writes as my system grows? what does the process of adding additional nodes to the system look like?
  • how are concurrency conflicts handled when there is contention over shared data?

mongoDB majority read concern is confusing AF

one misconception of mongo’s read concern: majority is that it guarantees read-your-write consistency by reading from a majority of nodes. this is a common misunderstanding because its counterpart, write concern: majority, does require acks from a majority of nodes.

… but that’s not at all what read concern does!

reads always get submitted to a single node using a server selection process that takes into account your read preference (primary, primaryPreferred, secondary, etc).

  • if you have a primary read preference, a read will always go to the primary.
  • if you have a secondary read preference, a read will get submitted to a single secondary. if you have two replicas, a read goes to one of them

when the read concern (not the same as read preference, great naming mongodb) is set to majority, that’s basically saying “only return data for this query that has been committed / successfully written to the majority of nodes”.

this does not mean that you’re always reading the latest write

to understand this, you need to understand mongo’s notion of a “majority commit”.

the majority commit point for any write is determined by the primary during the standard replication process. when data gets replicated to a secondary node, the secondary checks with the primary whether to update its “majority commit” snapshot of the data. If that value has not yet been majority committed, the majority commit snapshot keeps its previous majority-committed value.

that’s why it’s still possible to read stale values with read: majority. The majority commit snapshot on any given node is only updated for a particular value when it has actually been replicated from the primary to a majority of the replica set’s nodes. so the node you’re reading from may not have the majority-committed version of the data you literally just wrote. it’ll give you the previous majority-committed value instead of the latest value that perhaps has not yet propagated to a majority of nodes.

errr so what’s the point even MONGO

what it does mean is that you can trust that the data you’re reading has a high level of durability: in the event of a failure, the value you’re reading won’t be rolled back because it has already been majority committed.
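for reference, here’s what specifying a majority read concern looks like with pymongo (the connection string, database, and collection names are hypothetical):

from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")

# only return data that has been majority committed -- durable, but possibly stale
coll = client.mydb.get_collection("orders", read_concern=ReadConcern("majority"))
doc = coll.find_one({"order_id": 1})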

read your own writes for real

reading your own writes is a special case of causal consistency, and while a majority read concern alone is not sufficient, it is a necessary component of setting up read-your-own-writes consistency in mongo.

to achieve reading your own writes, you need to ensure the following mAgiCaL settings:

  1. operations are done inside a session with causal consistency enabled
  2. write concern is majority
  3. read concern is majority
  4. lol?

why do you need to set specific read and write concerns even though causal consistency is enabled? I’m not sure, but it’s an extremely confusing and misleading API

theoretically/formally speaking… causal consistency includes read your own writes consistency, but in MongoDB enabling causal consistency is not sufficient for reading your own writes!

if you do have these settings on, MongoDB will track operations with a global logical clock, and your reads will block until the node is able to return the most recent majority-committed write from the same session.

without causal consistency enabled, a write may go to a majority of nodes, but the read may still wind up returning data from a node that does not yet have the write that just happened in the same session. The causally consistent session is what causes a read to block if it would otherwise return stale data.
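here’s a minimal sketch of those mAgiCaL settings with pymongo (the connection string, database, and collection names are hypothetical):

from pymongo import MongoClient, ReadPreference
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")

# 1. causally consistent session, 2. write concern majority, 3. read concern majority
with client.start_session(causal_consistency=True) as session:
    coll = client.mydb.get_collection(
        "orders",
        read_concern=ReadConcern("majority"),
        write_concern=WriteConcern("majority"),
        read_preference=ReadPreference.SECONDARY_PREFERRED,
    )
    coll.insert_one({"order_id": 1, "status": "created"}, session=session)
    # because the read happens in the same causally consistent session, it waits until
    # the selected node can return the majority-committed write above
    doc = coll.find_one({"order_id": 1}, session=session)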

wow this sounds painful what are my other options

situations involving multi-document operations are most likely to bring up requirements around read/write consistency. the best way to skip all that is to use mongo / nosql as it was designed: focus on single-document, atomic operations that don’t require you to interleave reads and writes. this means modeling your data in a denormalized way and not treating mongo too much like a relational db. it’s actually what they recommend too!

OR just dont use mongo. postgresql4lyfe

SAML and OAuth purposes

I’ve been thinking about SAML and OAuth a lot lately due to work. I’m putting down some of my thoughts here on how I believe they differ in purpose.

let’s say there’s a COOL WEB APP with typical authentication using email and password. these creds get transmitted in an HTTP request, the backend verifies the credentials, and it grants access to the user (hopefully the stored credentials are hashed, not just encrypted…).

but there are many situations in which you may not want to be the one in charge of creds. maybe you’re scared. or maybe you want potential users to log in using their existing account with a different service (say, Google or Facebook) for convenience. or both.

if the user authenticates with a third party service (that you trust), you are basically delegating identity verification to a separate service. this is sort of the point of both SAML and OAuth at a high level, but they’re designed for different problems.

oauth

implementing OAuth in the web app lets users access the service through a trusted third party that does the auth. The user goes through a login flow with the third party, and eventually the third party grants a token that your app can use to retrieve user information (such as an email address), which can then be used to grant access to the COOL WEB APP.
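as a sketch of what “grants a token that your app can use” looks like, here’s a rough authorization-code exchange in Python with requests (the endpoint URLs, client credentials, and redirect URI are hypothetical placeholders for whatever provider you integrate with):

import requests

TOKEN_URL = "https://auth.example.com/oauth/token"        # hypothetical provider endpoint
USERINFO_URL = "https://auth.example.com/oauth/userinfo"  # hypothetical provider endpoint

def exchange_code_for_token(code: str) -> dict:
    # the app exchanges the short-lived authorization code (sent to its redirect URI
    # after the user logs in with the third party) for an access token
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": "https://coolwebapp.example.com/callback",
        "client_id": "COOL_WEB_APP_CLIENT_ID",
        "client_secret": "COOL_WEB_APP_CLIENT_SECRET",
    })
    resp.raise_for_status()
    return resp.json()  # access_token, token_type, expires_in, ...

def fetch_user_info(access_token: str) -> dict:
    # the app uses the token to retrieve user info (such as email) and grant access
    resp = requests.get(USERINFO_URL, headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()
    return resp.json()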

as a user, I might access multiple services with different OAuth flows. if I use service A, I might OAuth with my Google login. if I use service B, I might OAuth with my Facebook login. but what if I want to be auth’d into both service A and service B using a single auth step, AKA single sign-on? I’m lazy and don’t want to go through a different OAuth flow for each service. or maybe I’m a big corp and I want to centralize IT auth flows.

OAuth, however, is not designed for single sign-on scenarios (please correct me if I’m wrong!)

saml

that’s what SAML addresses! Implementing SAML for a web app is similar to OAuth in that auth is being delegated to a third party, but in this case that third party is a specific entity known as an “Identity Provider” (IDP).

the responsibility of an identity provider is to grant a user access to multiple cool web apps, not just the one COOL WEB APP. the way this works is that each of those web apps has a trust relationship with the identity provider. so trust is first established with the IDP, and that trust is then used to gain access to the related services. that’s basically single sign-on in a nutshell.

the user signs into a single service (the identity provider), and that service has a trust relationship with N services. The user can then log into those N services (one of which is yours), all mediated through SAML. this is why I believe someone came up with the fancy hundred-dollar phrase “federated identity”.

there is no such thing as “non-relational” data

in data modeling discussions I often hear the phrase “non-relational data”. it’s usually someone making a case for why data should be in a NoSQL store and live denormalized. the argument is usually that the data itself is somehow inherently non-relational and so it should be put in a non-relational database.

in reality, most complex data have relationships and every conversation about storage systems is fundamentally about how to most effectively store and fetch that data. it’s a lower level implementation detail. so depending on the use cases and constraints, that data may be stored in a relational database or a non-relational database, but the data itself is not inherently “non-relational”.

for instance, let’s say you’re storing user analytics data and each user has many devices. there’s a one-to-many relationship between a user and their devices, but if the devices list is append-only and you’ll never have to mutate any individual device, maybe there’s no need for a normalized representation of users and devices that makes you pay the processing cost of JOINs on every read.

{
  "_id": ObjectId("5f5a1f5e3dc1264c7ee03d5a"),
  "username": "john_doe",
  "email": "[email protected]",
  "devices": [
    {
      "device_id": "123456789",
      "device_name": "Smartphone",
      "os": "iOS"
    },
    {
      "device_id": "987654321",
      "device_name": "Laptop",
      "os": "Windows"
    },
    {
      "device_id": "567890123",
      "device_name": "Tablet",
      "os": "Android"
    }
  ]
}

still, I don’t think that makes the data non-relational – there’s still a one-to-many relationship. what it does mean is that there’s a processing performance trade-off if it’s stored in a normalized form (which you can usually do even in a non-relational database nowadays).

how two phase locking prevents lost updates

two phase locking is an old method of ensuring serializable transactions using locks.

a common issue with non-serializable isolation (which is typically what most RDBMSes default to…) in the face of concurrent writes is the lost update problem.

here’s an example of a lost update (sketched in code after this list). let’s assume:

  • a record in a database with a column x that has a value of 5
  • there are two independent processes that will perform a read-modify-write on that record
  • the two processes read the current value at the same time. assuming readers don’t block readers, they’ll read the same value
  • process A will attempt to read the current value of 5 and increment it by 2
  • process B will attempt to read the current value of 5 and increment it by 5
  • process A writes first and the new value of x is 7
  • process B writes last and the new value of x is 10. The update from process A is considered lost / as if it never happened!
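here’s that read-modify-write sketched in Python, assuming PostgreSQL via psycopg2 and a hypothetical counters(id, x) table. Without the FOR UPDATE, two concurrent runs at read committed can both read x = 5 and one increment is silently lost; with it, the second transaction blocks on the row lock until the first commits:

import psycopg2

def increment_x(amount: int) -> None:
    conn = psycopg2.connect("dbname=test")  # hypothetical connection string
    try:
        with conn:  # commits on success, rolls back on exception
            with conn.cursor() as cur:
                # FOR UPDATE takes an exclusive row lock, so a concurrent transaction
                # blocks here and then reads the already-incremented value
                cur.execute("SELECT x FROM counters WHERE id = 1 FOR UPDATE")
                (x,) = cur.fetchone()
                cur.execute("UPDATE counters SET x = %s WHERE id = 1", (x + amount,))
    finally:
        conn.close()

# process A: increment_x(2); process B: increment_x(5) -> x ends at 12, neither update lost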

some databases will detect a write conflict like this and raise an error, while others may just proceed without a hiccup. In either case, there’s a potential conflict that may need to be resolved depending on the expectations of the client. If this is a bank account, a lost update is a huge problem. If this is a view counter, maybe it’s not as big of a problem.

(conservative) two phase locking

there are two types of locks in two-phase locking (2PL): shared locks and exclusive locks. exclusive locks block both reads and writes; only shared locks can be held by multiple transactions at once. In snapshot isolation levels, exclusive locks from writers only block other writers – readers can still read records that are exclusively locked. with serializable 2PL, writers can also block readers.

the reason 2PL is able to guarantee serializability is that it splits its locking behavior into two phases – one before actual query execution and the other after.

prior to actually executing the query, the query planner figures out what locks it needs to obtain. If a transaction only does a read, only a shared lock is acquired for that record. If, however, a transaction does a read of a record followed by a write to the same record, the shared lock is “upgraded” to an exclusive lock.

once all the locks are obtained (lock point), the locks are held until execution is complete. If you have two concurrent transactions, the first one with an exclusive lock to reach the lock point will block the other. This is what ensures serial execution.

and finally, the reason this is sometimes referred to as conservative 2PL is that it obtains all the locks it might need upfront, before execution begins, rather than acquiring them lazily as records are touched.

mongoDB read preferences

MongoDB read preferences give you control over read behavior when using replica sets. Writes always go to the primary, but reads can be configured to go to a secondary or the primary based on various criteria. In most versions of mongo, the client’s read preference defaults to primary, but you should check your version for the default.
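for reference, here’s how a read preference is set per collection with pymongo (the connection string and names are hypothetical):

from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")

# reads go to a secondary chosen by server selection, falling back to the primary
# if no secondary is available
coll = client.mydb.get_collection(
    "events", read_preference=ReadPreference.SECONDARY_PREFERRED
)
doc = coll.find_one({"type": "page_view"})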

primary

Read from the primary. if the primary is down, the read will fail. This is useful if you need the latest successful write and have zero tolerance for stale data. The terms “latest” and “successful” are a bit of a sliding scale depending on the read concern. For example, with read concern local on a primary, it’s possible to read other clients’ writes before those writes have been acknowledged to the issuing client. In practical terms, this just means the data could be lost in the event of a rollback, since it may not have been replicated to a majority of nodes.

Regardless of a write’s write concern, other clients using "local" or "available" read concern can see the result of a write operation before the write operation is acknowledged to the issuing client.

https://www.mongodb.com/docs/manual/core/read-isolation-consistency-recency/#read-uncommitted

But I thought mongo maintained a WAL using an on-disk journal? Why would data be rolled back at all?

Yes, the journal is used for data recovery in the event of a node shutdown. But when a primary goes down, a new node is elected as the primary, and when the previous primary rejoins the cluster as a secondary, it may have writes that are ahead of the new primary. Since the new primary is the source of truth, those writes need to be rolled back to be consistent with the new primary!

If you want data that has the highest durability level, use majority read concern. When read concern is majority, it will only return data that has been successfully acknowledged by a majority of nodes. However, this does sacrifice some level of data freshness because the latest write from a client may not have been replicated to the majority of nodes yet, so you may be reading an older value even though you’re reading from the primary.

Another factor to consider when choosing this setting is your primary node’s capacity and current load – if your primary is close to capacity on CPU or memory, directing all reads to the primary may impact writes.

primaryPreferred

Read from the primary, but if the primary is unavailable, the read goes to a secondary. This setting is appropriate for cases where you want the most recent write but can tolerate occasional stale reads from a secondary. Replication lag can range pretty widely, from single-digit milliseconds to minutes depending on write volume, so keep in mind that during periods of high replication lag you’re more likely to be reading stale data.

secondary

All reads go to secondaries. Not all secondaries though – just one based on a server selection process. The server selection works like this:

First, there’s a default threshold value (localThresholdMS), which is 15ms. This isn’t the whole window used for server selection though – it’s only part of it. The other part is the lowest average network round-trip time (RTT) among eligible nodes. So if the closest node has an RTT of 100ms, the eligible set will be the nodes whose RTT falls within 100 + 15 = 115ms, and one of them is picked at random to receive the request.
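here’s a toy version of that selection window in Python (the node names and RTTs are made up; the real driver also filters by read preference, staleness, and so on):

import random

def pick_node(rtt_ms: dict, local_threshold_ms: float = 15.0) -> str:
    lowest = min(rtt_ms.values())
    window = lowest + local_threshold_ms
    eligible = [node for node, rtt in rtt_ms.items() if rtt <= window]
    return random.choice(eligible)

# closest node is 100ms away, so anything within 115ms is eligible
print(pick_node({"node-a": 100, "node-b": 110, "node-c": 180}))  # node-a or node-b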

This read preference can make sense if your system is very read-heavy AND you can tolerate stale reads. Scaling out with many secondary nodes is one way to spread your read load and improve read availability. At the same time, it can add load on your primary (more secondaries syncing the oplog), increase the likelihood of stale reads because there are now more nodes that may lag behind the latest writes, and potentially increase write latency for writes that require majority acknowledgement, because a majority of a larger replica set means more acknowledgements (N / 2 + 1).

secondaryPreferred

All reads go to secondaries, but if none are available, reads go to the primary. The same concerns as the “secondary” read preference apply here. The additional factor to consider is whether your primary can afford to take on the extra read load when secondaries are unavailable.

nearest

Doesn’t matter whether it’s a primary or a secondary – just pick the member with the lowest latency based on the server selection algorithm I mentioned in the secondary read preference section.

why the words in “CAP theorem” are confusing

let’s go through each one, ok? my goal is to make you as confused as I am.

consistency

the word “consistency” is super overloaded in the realm of distributed systems. In CAP, consistency means (informally) that every read reflects the most recent write. This is also known as single-copy consistency or strict / strong consistency. Data is replicated to multiple nodes in these systems, but any reads to this storage system should give the illusion that reads and writes are actually going to a single node in a multi-user / concurrent environment. Put another way – there are multiple copies of the data in practice, but readers always see the latest (single) copy by some loose definition of “latest”.

unfortunately, “strong” consistency is a grey area with lots of informal definitions. Some distributed systems researchers such as Martin Kleppmann equate CAP consistency with linearizability, which, depending on your definition of strong consistency, is either a weaker form of it or a more precise and formal definition of it. Does consistency mean that every write is instantaneously reflected on all nodes? Or, as in linearizability, that once a write completes, all subsequent reads reflect it, with every operation appearing to take effect atomically at some point in a single, globally agreed-upon ordering?

data stores that support transactions are also characterized by their ACID properties (Atomic, Consistent, Isolated, Durable). ACID consistency has nothing to do with ensuring that readers get the latest write; ACID consistency is about preserving invariants in the data itself (such as keeping debits and credits balanced) within a transaction, which is often the responsibility of the application rather than the storage system. Relational databases support specific forms of invariant enforcement through referential integrity, but many invariants cannot be easily enforced by the storage system. Are they related at all? Well, they’re related insofar as CAP consistency trade-offs affect ACID consistency for system designers, but they’re still completely different notions of consistency! For example, if a transaction can read uncommitted or stale data (e.g. dirty reads), it becomes difficult for the application to maintain its invariants.

availability

availability in CAP has a very strict, binary definition. CAP availability states that every non-failing node (even if isolated from the rest of the system) must be able to respond successfully to requests. This is confusing because most site operators talk about availability in terms of uptime and percentages. For example, consider a system that currently serves most reads from secondary nodes. Let’s assume most of those secondaries fail and can no longer serve requests. The system switches all reads to the primary and is still “available” from a user standpoint. This fails the CAP definition of availability but passes the more practical operational definition. One way to distinguish the two when they’re discussed at the same time is to refer to CAP availability as “big A” and practical availability / uptime as “little a”.

in practice, availability (“little a”) isn’t black and white (can EVERY non-failing node respond?). Most real-world systems don’t even offer that strong a guarantee. The CAP definition might make sense for a theoretical system and make it easier to reason about formally / mathematically, but in reality many storage systems let users choose between different levels of availability for different operations. For example, some databases let users tune availability separately for reads and writes using quorums. A system that prioritizes data durability in the face of failures may sacrifice some write availability: if a write cannot be propagated to a majority of nodes, it returns an error. At the same time, it may prioritize read availability, so a response from any replica is considered successful, even if that data might be stale. If you’re trying to reason about the availability of a storage system, these details matter and are far more relevant than whether or not the system is capable of achieving big A (CAP availability).

partition tolerance

partition tolerance is confusing on multiple levels.

the word “partition” itself is confusing because partitions in a distributed data store usually mean splits or shards of the data maintained for scalability. In CAP, a partition refers to a network partition, which is basically a network failure that disrupts node-to-node communication. if a system remains operational in the presence of a network partition, it is considered partition tolerant in the CAP model.

second, CAP appears to present partition tolerance as something a distributed system either has or does not have. but because networks are fundamentally unreliable, system designers need to design systems that tolerate network errors. there’s not really a choice or trade-off to be made – it’s more of a requirement. unlike availability and consistency (which are actual trade-offs to make when there is a network partition), partition tolerance is not something you can relax in practice. so designers are really choosing between consistency and availability, assuming there will be network partitions. this is why some people say CAP is really just CP vs AP: you either choose consistency with partition tolerance or availability with partition tolerance.

finally, network partitions are relatively rare events. two data centers, or nodes within a data center, may lose communication, but that’s rare compared to individual node failures (disk full, over capacity, random crashes) that happen all the time in a large network. In the CAP model, however, a single node failure isn’t a network failure, so it falls outside the stated rules of CAP. Eric Brewer has since clarified that in practice, nodes that fail to respond within some time window can be treated as conceptually equivalent to an actual partition.