NIST Finalizes 'Lightweight Cryptography' Standard to Protect Small Devices (nist.gov)
149 points by gnabgib 16 hours ago | 96 comments
loeg 16 hours ago [-]
> If lightweight cryptography was a good idea, we’d just call it “cryptography.”

https://x.com/matthew_d_green/status/1948476801824485606

tptacek 16 hours ago [-]
I feel like you need to be read in, at least a little bit, into what's going on in cryptography research before you take statements like this at face value, because a lot of pretty serious people work on lightweight cryptography constructions and primitives.

(I'm not dissing Green here; I'm saying, I don't think he meant that statement to be proxied to a generalist audience as an effective summation of lightweight cryptography).

adastra22 14 hours ago [-]
There’s a very serious point here. Cryptographers are and always have been deeply concerned with performance. Some of the most skilled low level optimization people I know work on cryptography. It is only relatively recently that computer hardware has gotten powerful enough that cryptography isn’t a serious bottleneck for production systems. In all the recent new crypto standards, a big decision in the whittling down of the finalists was performance.

If someone is telling you that we need a new, faster standard for cryptography, and the selling point is “faster,” you’d better wonder why that wasn’t the standard already in use. If there is not some novel, brand new algorithm being employed, the answer is that it is insecure. Or at least that it doesn’t meet the level of security for general use, which to a cryptographer is approximately the same thing.

aseipp 10 hours ago [-]
Performance is an evolving target. Meta reported they spend ~0.05% (1 out of every 2000 CPU cycles) on X25519 key exchange within the last year, which is quite significant. If that can be brought down, that's worthwhile. And ongoing research and deployment of PQC will make key exchange even more expensive.

> If there is not some novel, brand new algorithm that is being employed, the answer is because it is insecure.

Lol, that is just not true at all. A major point of discussion when NIST announced Keccak as the SHA-3 winner back in 2012 was that BLAKE1 at the time offered significantly better software performance, which was considered an important practical reality, and was faster than SHA-2 at a higher (but insignificantly so) security margin; their own report admitted as much. The BLAKE1 family is still considered secure today, its internal HAIFA design is very close to existing known designs like Merkle–Damgård, it isn't some radically new thing.

So why did they pick Keccak? Because they figured that SHA-2 was plenty good and already deployed widely, so "SHA-2 but a little faster" was not as compelling as a standard that complemented it in hardware; they also liked Keccak's sponge design, which was new and novel at the time and allowed AEAD, domain separation, etc. They admit that ultimately any finalist, including BLAKE, would have been a good pick. You can go read all of this yourself. The Keccak team even has new work on more performant sponge-inspired designs, such as their work on Farfalle and deck functions.

The reality is that some standards are chosen for a bunch of reasons and performance is only one of them, though very important. But there's plenty of room for non-standard things, too.

AyyEye 8 hours ago [-]
> Meta reported they spend ~0.05% (1 out of every 2000 CPU cycles) on X25519 key exchange within the last year, which is quite significant.

That is not even remotely significant. Facebook spends 25% (1 out of every 4) of my CPU cycles on tracking. Pretty much anything else they optimize (are they still using Python and PHP?) will have a bigger impact.

ifwinterco 6 hours ago [-]
Or just reduce the quality of served photos and videos by 0.05%, probably nobody would notice
Telemakhos 1 hours ago [-]
> Facebook spends 25% (1 out of every 4) of my CPU cycles on tracking.

That's their core business.

fsflover 5 hours ago [-]
This is significant for Facebook, not for you. I solved this problem for myself by not using Facebook.
zarzavat 7 hours ago [-]
> A major point of discussion when NIST announced the SHA3 finalist being Keccak back in ~2012(?) was that BLAKE1 at the time offered significantly better software performance

IIRC, Keccak had a smaller chip area than Blake. Hardware performance is more important than software performance if the algorithm is likely to be implemented in hardware, which is a good assumption for a NIST standard. Of course, SHA3 hasn't taken off yet but that's more to do with how good SHA2 is.

> BLAKE1 family is still considered secure today, its internal HAIFA design is very close to existing known designs like Merkle–Damgård, it isn't some radically new thing.

Given that the purpose of the competition was to replace SHA2 if/when it is weakened, choosing a similar construction would not have been the right choice.

adrian_b 4 hours ago [-]
An additional argument for Keccak was that even if its performance in software implementations was mediocre, it allows very fast and cheap hardware implementations, so from this POV it was definitely better than the alternatives.
conradev 8 hours ago [-]

  If the selling point is “faster,” you’d better wonder why that wasn’t the standard already in use.
Because “fast enough” is fast enough, unless it isn’t.

My desktop CPU has AES in hardware, so it’s fast enough to just run AES.

My phone’s ARM CPU doesn’t have AES in hardware, so it’s not fast enough. ChaCha20 is fast enough, though, especially with the SIMD support on most ARM processors.

All this paper is saying is that ChaCha20 is not fast enough for some devices, and so folks had to put in intellectual effort to make a new thing that is.

But even further: everyone’s definition for “fast enough” is different. Cycles per byte matters more if you encrypt a lot of bytes.

adrian_b 4 hours ago [-]
Only extremely old ARM CPUs (i.e. 32-bit CPUs from more than a decade ago) lack AES hardware. All 64-bit ARM CPUs have it, along with hardware for at least SHA-1 and SHA-256. The more recent ARM CPUs have HW support for more cryptographic algorithms than the majority of desktop CPUs.

"Lightweight" cryptography is not intended for something as powerful as a smartphone, but only for microcontrollers that are embedded in small appliances, e.g. sensors that transmit wirelessly the acquired data.

throw0101c 2 hours ago [-]
> Only extremely old ARM CPUs (i.e. 32-bit CPUs from more than a decade ago) do not have AES hardware.

I remember when Sun announced the UltraSPARC T2 in 2007 which had on-die hardware for (3)DES, AES, SHA, RSA, etc:

* https://en.wikipedia.org/wiki/UltraSPARC_T2

(It also had two 10 GigE modules right on the chip.)

wmf 13 hours ago [-]
> If the selling point is “faster,” you’d better wonder why that wasn’t the standard already in use.

Because the field of cryptography advances? You could have made the same argument about Salsa/ChaCha but those are great ciphers. And now we have Ascon which has the same security level but I guess is even faster.

adgjlsfhk1 11 hours ago [-]
If these were faster than AES and as strong as AES, they would be replacing AES, not only being used for "lightweight devices unable to use AES"
tptacek 10 hours ago [-]
They're faster than AES on their target platforms. It really feels like people are just trying to run with this out-of-context Matthew Green quote as if it was an axiom.
TJSomething 8 hours ago [-]
Rijndael (now AES) wasn't even the strongest finalist in the 2001 AES evaluation. It partially won by dint of being faster on contemporary x86 processors than Serpent or Twofish. Nowadays, it's faster on x86-64 processors because there's dedicated silicon for running it. Modern small platforms don't have this silicon and have different performance characteristics to consider. Also, without that dedicated silicon, implementations tend to be vulnerable to side-channel attacks that were unknown at the time.
throw0101c 10 hours ago [-]
> If these were faster than AES and as strong as AES […]

Not everything needs to be as strong as AES, just "strong enough" for the purpose.

Heck, the IETF has published TLS cipher suites with zero encryption, "TLS 1.3 Authentication and Integrity-Only Cipher Suites":

* https://datatracker.ietf.org/doc/html/rfc9150

Lightweight cryptography could be a step between the above zero and the 'heavyweight' ciphers like AES.

thyristan 6 hours ago [-]
NULL ciphers in TLS are intended to enable downgrade attacks. Nothing else.

Same thing with weaker ciphers. They are a target to downgrade to, if an attacker wishes to break into your connection.

Dylan16807 4 hours ago [-]
> NULL ciphers in TLS are intended to enable downgrade attacks. Nothing else.

Intended... Do any experts think that? Can you cite a couple? Or direct evidence of course.

Unless I'm missing a joke.

thyristan 3 hours ago [-]
Thought this was common knowledge. When TLS 1.3 was standardized, it explicitly left out all NULL and weak (such as RC4) ciphers. It also left out the weaker RSA/static-DH key exchange methods, so that easy decryption of recorded communication became impossible. To that, the enterprises who would like to snoop on their employees and the secret services who would like to snoop on everyone reacted negatively, and they tried to introduce their usual backdoors, such as NULL ciphers, again:

https://www.nist.gov/news-events/news/2024/01/new-nccoe-guid... with associated HN discussion https://news.ycombinator.com/item?id=39849754

https://www.rfc-editor.org/rfc/rfc9150.html was the one reintroducing NULL ciphers into TLS 1.3. RFC 9150 is written by Cisco and ODVA, who previously made a fortune with TLS interception/decryption/MitM gear, selling to enterprises as well as (most probably; Cisco has been a long-time bedmate of the US gov) spying governments. The RFC weakly claims "IoT" as the intended audience due to cipher overhead; however, that is extremely hard to believe. They still do SHA-256 for integrity, they still do the whole very complicated and expensive TLS dance, but then skip encryption and break half the protocol on the way (since stuff like TLS 1.3 0-RTT needs confidentiality). So why do the expensive TLS dance at all when you can just slap a cheaper HMAC on each message and be done? The only sensible reason is that you want to have something in TLS to downgrade to.
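For illustration, the cheaper slap-an-HMAC-on-it approach described above is a one-liner with Python's standard library (the key and message here are made-up placeholders, not anything from RFC 9150):

```python
import hashlib
import hmac

# Hypothetical pre-shared key and message; in the scenario described,
# each message would just carry this tag instead of a full TLS session.
key = b"pre-shared-device-key"
message = b"sensor-reading:42"

# HMAC-SHA-256 produces a 256-bit authentication tag.
tag = hmac.new(key, message, hashlib.sha256).digest()
assert len(tag) == 32

# The receiver recomputes the tag and compares in constant time.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```

This gives integrity and authenticity with a pre-shared key and none of the handshake machinery, which is exactly the trade the comment is pointing at.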

tptacek 21 minutes ago [-]
How exactly do NULL ciphers accomplish enterprise monitoring goals? The point of the TLS 1.3 handshake improvements was to eliminate simple escrowed key passive monitoring. You could have the old PKZip cipher defined as a TLS 1.3 ciphersuite; that doesn't mean a middlebox can get anybody to use it. Can you explain how this would get any enterprise any access it doesn't already have?
Dylan16807 1 hours ago [-]
Your first set of links seems to be about central key logging for monitoring connection contents? If there's stuff about null encryption in there I missed it. And even if there is, it all sounds like explicit device configuration, not something you can trigger with a downgrade attack.
thyristan 1 hours ago [-]
Yes, my first link is about that. It illustrates and explains the push to weaken TLS1.3 that has later been accomplished by the re-inclusion of NULL ciphers.

And all the earlier weaker ciphers were explicit device configuration as well. You could configure your webserver or client not to use them. But the problem is that there are easy accidental misconfigurations like "cipher-suite: ALL", well-intended misconfigurations like "we want to claim IoT support in marketing, so we need to enable IoT 'ciphers' by default!", and the sneaky underhanded versions of the aforementioned accidents. Proper design would just not create a product that can be mishandled, and early TLS 1.3 had that property (at least with regards to cipher selection). Now it's back to "hope your config is sane" and "hope your vendor didn't screw up". Which is exactly what malicious people need to hide their intent and get their decryption backdoors in.

Dylan16807 49 minutes ago [-]
The first link is weakening in a way that is as far from a downgrade attack as you can possibly get. And on top of that TLS 1.3 has pretty good downgrade prevention as far as I know.

> well-intended misconfigurations like "we wan't to claim IoT support in marketing, so we need to enable IoT-'ciphers' by default!" and the sneaky underhanded versions of the aforementioned accidents

Maybe... This still feels like a thing that's only going to show up on local networks and you don't need attacks for local monitoring. Removing encryption across the Internet requires very special circumstances and also lets too many people in.

ifwinterco 6 hours ago [-]
Most modern processors have hardware support for AES, that's why it's fast. ChaCha is significantly faster when run on the CPU
yardstick 9 hours ago [-]
Security standards can move extremely slowly when the security of the incumbent algorithm hasn’t been sufficiently compromised, despite better (faster, smaller) alternatives.

Tech Politics comes into it.

stouset 10 hours ago [-]
I mean, they are faster and as strong and are gradually replacing it?
tptacek 10 hours ago [-]
Sponges generally? Maybe? LWC constructions not so much?
stouset 8 hours ago [-]
I thought the “they” being referenced were chacha/salsa.
tptacek 13 hours ago [-]
"If there is not some novel, brand new algorithm that is being employed, the answer is because it is insecure."

No, this is not at all true.

jona-f 8 hours ago [-]
This thread fails to mention that a cipher has to be somewhat hard to compute, or someone with a lot of resources can just brute-force it. Therefore you also want an implementation of a given cipher to be as efficient as possible, so that no future improvement lowers the effective security of your cipher.
adrian_b 4 hours ago [-]
"Lightweight" cryptography is not intended for smartphones, personal computers and similarly powerful devices.

It is intended only for microcontrollers embedded in various systems, e.g. the microcontrollers from a car or from a robot that automate various low-level functions (not the general system control), or from various sensors or appliances.

It is expected that the data exchanged by such microcontrollers is valuable only if it can be deciphered in real time.

If an attacker would be able to decipher the recorded encrypted data by brute force after a month, or even after a week, it is expected that the data will be useless. Otherwise, standard cryptography must be used.

ignoramous 12 hours ago [-]
> computer hardware has gotten powerful enough that cryptography isn’t a serious bottleneck for production systems ... someone is telling you that we need a new, faster standard for cryptography, and the selling point is “faster"

Google needed faster than standard AES cryptography for File-based Encryption on Android Go (low cost) devices: https://tosc.iacr.org/index.php/ToSC/article/view/7360 / https://security.googleblog.com/2019/02/introducing-adiantum...

tptacek 12 hours ago [-]
This doesn't appear to use LWC constructions though, mostly ChaCha20.
corranh 8 hours ago [-]
That would be correct if the question was ‘For a $2000 32-core desktop should we switch to lightweight cryptography’.

The question is should we switch from some ridiculously insecure crappy crypto on a $3 part to this better lightweight crypto implementation.

Yah, we probably should, it’s better than what we had, right?

rocqua 16 hours ago [-]
That feels too reductive.

The point of these primitives is not to trade security for ease of computation. The point is to find alternatives that are just as strong as AES-128 but with less computation. The trade-off is in how long and hard people have tried to break it.

tptacek 15 hours ago [-]
That's one of the tradeoffs but the most subtextual of them; the bigger tradeoff is that lightweight constructions are optimized for constrained environments, and outperform only on platforms that don't already do a good job with things like AES. That's the answer for why we wouldn't "just call it cryptography": things that perform well (1) on constrained platforms (2) for the kinds of messages exchanged by constrained platforms will probably tend to get their asses kicked by AES on non-constrained platforms.
mananaysiempre 2 hours ago [-]
This is why eSTREAM’s decision to leave MCU-class software essentially unaddressed puzzles me. What do I use on an 8-bit or 16-bit controller with no wide multiplies and no barrel shifter? Before this (which seems fine if not exactly tiny at first glance), there was... Gimli (with the same caveats)? Speck?

I’m guessing that the chip area of hardware AES is utterly inconsequential compared to all the peripherals you get on a modern micro, but the manufacturers are going to keep charging specialized-applications money for that until we’re all on 32-bit ARMs with multipliers and ChaCha becomes viable.

Taek 15 hours ago [-]
When you say "get their asses kicked", you mean in terms of performance right? Both sets of cryptography are secure under the same set of assumptions, it's just that one is more performant on limited instruction sets and the other is more performant on full featured instruction sets?
tptacek 13 hours ago [-]
I'm saying that AES isn't viable on a decent-sized chunk of existing embedded hardware, and that the constructions that are viable on those platforms both (1) fall below the security thresholds of front-line mainstream constructions like AES-GCM or Chapoly and (2) would in fact be slower than AES or Chapoly on modern workstation, server, and phone platforms.

Hardware capabilities vary widely; there isn't one optimal algorithm that fits (or, in the case of MCUs, is even viable) on every platform.

What's worse, efforts to shoehorn front-line mainstream constructions onto MCUs often result in insecure implementations, because, especially without hardware support (like carryless multiplication instructions), it's very difficult to get viable performance without introducing side channels.

thmsths 15 hours ago [-]
I am in no way a crypto expert, just someone who enjoys casually reading about it. But in my mind even a security-for-speed trade-off can be worthwhile. Otherwise the alternative is often no crypto at all, because it won't run on that tiny MCU and we don't have the budget for a hardware accelerator.
Taek 15 hours ago [-]
Even the tiniest MCU can typically perform more than one cryptographic operation per second. If your MCU has any cycles to spare at all it usually has enough cycles for cryptography.

If you don't have any cycles to spare, you can upgrade to an MCU that does have cycles to spare for less than $0.50 in small batches, and an even smaller price delta for larger batches.

Any device that doesn't use cryptography isn't using it because the manager has specifically de-prioritized it. If you can't afford the $0.50 per device, you probably can't afford the dev that knows his way around cryptography either.

Sanzig 13 hours ago [-]
> Even the tiniest MCU can typically perform more than one cryptographic operation per second. If your MCU has any cycles to spare at all it usually has enough cycles for cryptography.

Well, no. If you can do 1 AES block per second, that's a throughput of a blazing fast 16 bytes per second.

I know that's a pathological example, but I do understand your point - a typical workload on an MCU won't have to do much more than encrypt a few kilobytes per second for sending some telemetry back to a server. In that case, sure: ChaCha20-Poly1305 and your job is done.

However, what about streaming megabytes per second, such as an uncompressed video stream? In that case, lightweight crypto may start to make sense.

Taek 13 hours ago [-]
1 operation per second would refer to cryptographic signatures. If you are doing ChaCha, the speeds are more like 1 Mbps. AES is probably closer to 400 kbps.

An uncompressed video stream at 240p, 24 frames per second is 60 Mbps, not really something an IoT device can handle. And if the video is compressed, decompression is going to be significantly more expensive than AES; adding encryption is not a meaningful computational overhead.
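(As an aside, the ~60 Mbps figure roughly checks out if one assumes widescreen 240p, i.e. a hypothetical 426x240 frame, at 24-bit color; neither assumption is stated in the thread:)

```python
# Uncompressed video bitrate = width * height * bits per pixel * frames/s.
# 426x240 ("widescreen 240p") and 24-bit RGB are assumptions, not
# figures given above.
width, height, bpp, fps = 426, 240, 24, 24

bits_per_second = width * height * bpp * fps
print(f"{bits_per_second / 1e6:.1f} Mbps")  # -> 58.9 Mbps
```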

torgoguys 11 hours ago [-]
>Even the tiniest MCU can typically perform more than one cryptographic operation per second. If your MCU has any cycles to spare at all it usually has enough cycles for cryptography.

>1 operation per second would refer to cryptographic signatures. If you are doing Chacha, the speeds are more like 1 mbps. AES is probably closer to 400 kbps.

It sounds to me like you, sir or madame, have not worked with truly tiny MCUs. :-)

But yes, there are inexpensive MCUs where you can do quite a bit of crypto in software at decent speeds.

dylan604 11 hours ago [-]
Why would you compare an uncompressed video stream to anything in this discussion? Especially at such a small frame size in modern video usage.

Modern encrypted streaming uses pre-existing compressed video where the packets are encrypted on the way to you by the streaming server. It has to uniquely encrypt the data being sent to every single user hitting that server, so it's not just a one-and-done type of thing. It is every bit of data for every user. So that scales to a lot of CPU on the server side to do the encryption. Yes, on the receiving side, while your device is only dealing with the one single stream, more CPU cycles will be spent decompressing the video than decrypting it. But again, that's only half of the encrypt/decrypt cycle.

HeatrayEnjoyer 13 hours ago [-]
If it's compressed you don't need to decompress it first.
dylan604 11 hours ago [-]
on the receiving end.
yardstick 9 hours ago [-]
The device doing the decryption may not be the same device that does the decompression.

Eg a small edge gateway could be doing the VPN, while the end device is decoding the video.

tptacek 13 hours ago [-]
This whole framing is weird, because you can't spend $0.50 per already-deployed part to upgrade to something that can viably do AES.
wakawaka28 13 hours ago [-]
>If you don't have any cycles to spare, you can upgrade to an MCU that does have cycles to spare for less than $0.50 in small batches, and an even smaller price delta for larger batches.

The monetary cost is most likely not the problem. Tacking on significant additional work is bound to consume more power and generate heat. Tiny devices often have thermal and power limits to consider.

adgjlsfhk1 11 hours ago [-]
The problem is it isn't significant work. You can perform ChaCha8 with pen and paper in ~30 minutes. It's literally 768 single-cycle 32-bit integer operations.
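To make that concrete, here is the ChaCha quarter-round in Python: nothing but 32-bit additions, XORs, and rotations, checked against the test vector in RFC 8439 (the function names are mine):

```python
MASK32 = 0xFFFFFFFF

def rotl32(v, n):
    """Rotate a 32-bit word left by n bits."""
    return ((v << n) | (v >> (32 - n))) & MASK32

def quarter_round(a, b, c, d):
    """The ChaCha quarter-round: only adds, XORs, and rotations."""
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, section 2.1.1:
out = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
assert out == (0xEA2A92F4, 0xCB1CF8CE, 0x4581472E, 0x5881C4BB)
```

A full ChaCha round is four of these; the whole cipher is just this repeated, which is why it maps so well onto cheap integer hardware.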
ebiederm 14 hours ago [-]
I am not up to speed on these new algorithms. I still remember there was a lightweight cryptography algorithm a few years ago championed by the NSA that had a subtle (possibly deliberate) flaw in it.

When dealing with cryptography it is always necessary to remember cryptography is developed and operates in an adversarial environment.

Sanzig 13 hours ago [-]
Speck? To my knowledge there aren't any serious flaws despite a lot of public cryptanalysis. I think what sank Speck was that it came out a few years after the Dual_EC_DRBG fiasco and nobody was ready to trust an NSA-developed cipher yet - which is fair enough. The NSA burned their credibility for decades with Dual_EC_DRBG.
anfilt 10 hours ago [-]
Speck uses fewer resources to implement and, when I have tested it, is faster than Ascon.

I think the biggest problem is how they went about trying to standardize it back in the day.
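For reference, Speck's round function really is tiny: one modular add, two rotations, and two XORs per round. A sketch of the 64-bit-word variant's round and its inverse in Python (rotation amounts alpha=8, beta=3 as in the published design; this is illustrative, not a vetted implementation):

```python
MASK64 = (1 << 64) - 1
ALPHA, BETA = 8, 3  # rotation amounts for the 64-bit-word Speck variants

def ror(v, n):
    return ((v >> n) | (v << (64 - n))) & MASK64

def rol(v, n):
    return ((v << n) | (v >> (64 - n))) & MASK64

def speck_round(x, y, k):
    """One Speck encryption round: rotate, add, XOR in the round key."""
    x = ((ror(x, ALPHA) + y) & MASK64) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def speck_round_inv(x, y, k):
    """Inverse round: undo the steps in reverse order."""
    y = ror(y ^ x, BETA)
    x = rol(((x ^ k) - y) & MASK64, ALPHA)
    return x, y

# Round-trip check with arbitrary values:
x, y, k = 0x0123456789ABCDEF, 0xFEDCBA9876543210, 0x1F1E1D1C1B1A1918
assert speck_round_inv(*speck_round(x, y, k), k) == (x, y)
```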

tptacek 13 hours ago [-]
I mean, yeah, but also Simon and Speck aren't as good as the new generation of low-footprint designs like Ascon and Xoodyak. We know more about how to do these things now than we did 15 years ago.
Sanzig 13 hours ago [-]
Makes sense! Also, how does Speck fare in power analysis side channel attacks vs Ascon? My understanding was that was also one of the NIST criteria.
tptacek 13 hours ago [-]
I am way out of my depth both on power consumption and leakage, but presumably Ascon does better on both counts than Chapoly.
adgjlsfhk1 10 hours ago [-]
Really, ChaCha seems trivially implementable without leaking anything.
cvwright 16 hours ago [-]
Good news. We’ve waited so long that AES is pretty damn lightweight now too.
Sanzig 13 hours ago [-]
Eh, the problem with AES is side channel attacks in software implementations. They aren't necessarily obvious, especially if you're deploying on an oddball CPU architecture.

This standard targets hardware without AES accelerators like microcontrollers. Now, realistically, ChaCha20-Poly1305 is probably a good fit for most of those use cases, but it's not a NIST standard so from the NIST perspective it doesn't apply. Ascon is also a fair bit faster than ChaCha20 on resource constrained hardware (1.5x-3x based on some benchmarks I found online).

stouset 10 hours ago [-]
Knowing how much of a dog SHA-3 is due to its sponge construction basis, it’s superficially surprising to me to see a sponge-based LWC algorithm.

I guess we’ve had quite a few years to improve things.

adrian_b 4 hours ago [-]
SHA-3 is fast when implemented in hardware.

Its slowness in software and quickness in hardware have almost nothing to do with it being sponge-based; they are caused by the Boolean functions executed by the Keccak algorithm, which are easy to implement in hardware but need many instructions on most older CPUs (and far fewer instructions on Armv9 CPUs or AMD/Intel CPUs with AVX-512).

The sponge construction is not inherently slower than the Merkle–Damgård construction. One could reuse the functions iterated by SHA-512 or by SHA-256 and reorganize them to be used in a sponge-based algorithm, obtaining similar speeds with the standard algorithms.

That is not done because for the sponge construction it is better to design a mixing function with a single wider input instead of a mixing function with 2 narrower inputs, like for Merkle–Damgård. Therefore it is better to design the function from the beginning for being used inside a sponge construction, instead of trying to adapt functions intended for other purposes.
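The main nonlinear Boolean function in question is Keccak's chi step: each lane is XORed with the complement of its neighbor ANDed with the lane after that. That is a couple of gates per bit in hardware, but several instructions per word in scalar software. On one row of five lanes, in Python:

```python
MASK64 = (1 << 64) - 1

def chi(row):
    """Keccak's chi step on a row of five 64-bit lanes:
    a[i] ^= (~a[i+1]) & a[i+2], indices taken mod 5."""
    return [(row[i] ^ (~row[(i + 1) % 5] & row[(i + 2) % 5])) & MASK64
            for i in range(5)]

# Single-bit lanes make the nonlinearity easy to check by hand:
assert chi([1, 0, 0, 1, 1]) == [1, 1, 0, 1, 1]
```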

throw0101c 10 hours ago [-]
The IETF has published TLS cipher suites with zero cryptography, "TLS 1.3 Authentication and Integrity-Only Cipher Suites":

* https://datatracker.ietf.org/doc/html/rfc9150

Lightweight cryptography could be a step between the above zero and the 'heavyweight' ciphers like AES.

glitchc 11 hours ago [-]
This is facetious. Some protection is better than no protection.

If "every little bit helps" is true for the environment, it's also true for cryptography, and vice versa.

stouset 10 hours ago [-]
> Some protection is better than no protection.

No, not really.

Algorithms tend to fall pretty squarely in either the “prevent your sibling from reading your diary” or the “prevent the NSA and Mossad from reading your Internet traffic” camps.

Computers get faster every year, so a cipher with a very narrow safety margin will tend to become completely broken rapidly.

adrian_b 4 hours ago [-]
That classification has more steps.

Some things must be encrypted well enough so that even if NSA records them now, even 10 years or 20 years later they will not be able to decipher them.

Other things must be encrypted only well enough so that nobody will be able to decipher them close to real time. If the adversaries decipher them by brute force after a week, the data will become useless by that time.

Lightweight cryptography is for this latter use case.

theteapot 11 hours ago [-]
The environment? As in trees?
2OEH8eoCRo0 16 hours ago [-]
> We encourage the use of this new lightweight cryptography standard wherever resource constraints have hindered the adoption of cryptography
coppsilgold 16 hours ago [-]
They chose Ascon, a good set of sponge-based cryptographic functions for when you don't have hardware acceleration for AES or the CPU resources for ChaCha20; that is the intention of the standard. The security level is 128 bits (comparable to AES-128).

<https://ascon.isec.tugraz.at/specification.html>
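For a sense of scale, one round of the Ascon permutation is only a handful of word operations: a round-constant addition, the 5-bit S-box applied bit-sliced across the five 64-bit state words, and a linear layer of rotations. A Python sketch transcribed from my reading of the spec linked above (double-check the rotation amounts and S-box sequence against it before any real use):

```python
MASK64 = (1 << 64) - 1

def ror64(v, n):
    return ((v >> n) | (v << (64 - n))) & MASK64

def ascon_round(s, rc):
    """One round of the Ascon permutation on a 5-word (320-bit) state."""
    x0, x1, x2, x3, x4 = s
    x2 ^= rc  # constant-addition layer
    # Substitution layer: the 5-bit S-box, bit-sliced over 64 columns.
    x0 ^= x4; x4 ^= x3; x2 ^= x1
    t = [(~x & MASK64) & nxt for x, nxt in
         zip((x0, x1, x2, x3, x4), (x1, x2, x3, x4, x0))]
    x0 ^= t[1]; x1 ^= t[2]; x2 ^= t[3]; x3 ^= t[4]; x4 ^= t[0]
    x1 ^= x0; x0 ^= x4; x3 ^= x2; x2 = ~x2 & MASK64
    # Linear diffusion layer: each word XORed with two rotations of itself.
    x0 ^= ror64(x0, 19) ^ ror64(x0, 28)
    x1 ^= ror64(x1, 61) ^ ror64(x1, 39)
    x2 ^= ror64(x2, 1) ^ ror64(x2, 6)
    x3 ^= ror64(x3, 10) ^ ror64(x3, 17)
    x4 ^= ror64(x4, 7) ^ ror64(x4, 41)
    return [x0, x1, x2, x3, x4]
```

Note the S-box layer is a close cousin of Keccak's chi, which is part of why Ascon is so cheap in hardware.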

skeezyboy 5 hours ago [-]
I too approve of their choice of cryptographic functions, if a little miffed they didn't come to me first.
rocqua 16 hours ago [-]
This is pretty cool. But IoT tends to fail hard on key agreement, and nothing here solves that. This seems to pretty much require a pre-installed key; otherwise the overhead of securely installing a key would probably nullify the advantage of this encryption.
brohee 27 minutes ago [-]
The world this algorithm targets is exactly where that is doable. MCUs typically have a protected OTP area, which makes it a good place to inject keys right in the factory.
tptacek 15 hours ago [-]
Right, it's a competition to standardize authenticated encryption constructions, not entire cryptosystems.
LeGrosDadai 5 hours ago [-]
By the same token AES is useless as well, because it doesn't address key exchange. This was not the goal of this standardization process.
rocqua 4 hours ago [-]
My point was that AES and SHA are not the reason IoT cryptography is so often broken or missing. Instead it's getting the keys onto the system in a halfway secure manner that is the blocking issue.

Hence I'd be a lot more enthusiastic about NIST guidance on these points.

dvdkon 3 hours ago [-]
A pairing system as seen in e.g. Zigbee or BLE seems pretty good to me. Not everyone cares to implement it well and there's still no standard for web-based devices, but it's here and it works.

I'd like to see more devices able to pair with NFC, but even that's standardised for Bluetooth, just underused.

ahoka 4 hours ago [-]
If these primitives are less resource intensive than what we use today with the same level of security, then why don't we just use them everywhere? If they are not as secure, then why would we use them anywhere?
saaspirant 9 hours ago [-]
I wonder whether this is backdoored by NSA as well
LeGrosDadai 5 hours ago [-]
There is a lot of cryptanalysis on it: https://eprint.iacr.org/search?q=ASCON Furthermore, it is not designed by NIST.
jeffrallen 8 hours ago [-]
Yup, NIST has no business putting their names on anything cryptographic anymore.
SV_BubbleTime 8 hours ago [-]
Did I miss something recent? Or are we going back to the DES days?
0xfffafaCrash 7 hours ago [-]
While not exactly recent, I think most people have Dual_EC_DRBG in mind these days when it comes to NIST and backdoors

https://en.wikipedia.org/wiki/Dual_EC_DRBG

skeezyboy 5 hours ago [-]
They really are shits, aren't they?
dazhbog 4 hours ago [-]
Why not use ChaCha20 or XTEA for embedded devices? They are lighter than AES.
le-mark 12 hours ago [-]
Wikipedia says Ascon has 320 bits of state and uses 5-bit S-boxes. That's tiny compared to SHA-256 or BLAKE2. One would think a preimage attack would be much more tractable at that scale.

https://en.m.wikipedia.org/wiki/Ascon_(cipher)

adrian_b 3 hours ago [-]
This says very little about the strength of the cipher.

The initial state of ChaCha20 also has only at most 320 unknown bits (512 bits - 128 constant bits - 64 bits of a nonce). Actually you normally also know the counter, so there are only 256 unknown bits.

Of course the actual strength of the cipher cannot exceed the size of the state, but the design strength must be much lower for this cipher. It competes with AES-128, which is designed for an 128-bit strength.

320 bits of state is more than enough for a cipher that must have an 128-bit strength, or even for a cipher designed for a 256-bit strength, like AES-256 or ChaCha20.
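That accounting can be read straight off the state layout. A sketch of the original djb ChaCha20 state in Python (RFC 8439 instead splits the last row as a 32-bit counter plus a 96-bit nonce):

```python
import struct

# ChaCha20 initial-state layout, original djb variant:
# 4 constant words + 8 key words + 64-bit counter + 64-bit nonce.
CONSTANTS = (0x61707865, 0x3320646E, 0x79622D32, 0x6B206574)  # "expand 32-byte k"

def chacha20_state(key: bytes, counter: int, nonce: bytes):
    assert len(key) == 32 and len(nonce) == 8
    state = list(CONSTANTS)                # 128 bits, publicly known constants
    state += struct.unpack("<8L", key)     # 256 secret key bits
    state += [counter & 0xFFFFFFFF,        # 64-bit block counter,
              (counter >> 32) & 0xFFFFFFFF]  # normally known to the attacker
    state += struct.unpack("<2L", nonce)   # 64-bit nonce, usually public
    return state

s = chacha20_state(b"\x00" * 32, 0, b"\x00" * 8)
assert len(s) == 16  # 16 x 32-bit words = 512-bit state
```

Of the 512 bits, only the 256 key bits are secret; the rest are constants, counter, and nonce, which is the point being made above.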

201984 10 hours ago [-]
No? SHA-256 has only eight 32-bit words of state, which is even less than Ascon. BLAKE2s looks the same.

https://en.wikipedia.org/wiki/SHA-2

https://datatracker.ietf.org/doc/html/rfc7693

anfilt 11 hours ago [-]
While I know people were a little sceptical of it, I honestly liked the Speck cipher that was published.

This cipher is a lot heavier.

gnubee 7 hours ago [-]
A land shanty is just called a song. It's protective cryptography or not. Binary.
Onavo 15 hours ago [-]
Are these for garage doors and doorbells? Those devices could definitely use more security (it's not hard to stuff a proper TLS stack in a microcontroller, but manufacturers balk at even putting something as cheap as an ESP32 in their BOM).
brohee 25 minutes ago [-]
You'd change the coin cell in an ESP32-powered garage door fob monthly... Unacceptable to most.
tptacek 15 hours ago [-]
Also for currently-constrained platforms for which hardware updates are infeasible, software updates aren't, and cryptographic security is currently compromised due to lack of availability of appropriate constructions.
indolering 9 hours ago [-]
Really glad a sponge function won, they are a big step forward in terms of crypto engineering!
jeffrallen 8 hours ago [-]
s/NIST/NSA/g