What is the future of hardware, and how will it impact the future of code? What are the most promising approaches to breaking free of the von Neumann bottleneck? <- is this still a thing?
I have heard Simon Peyton Jones say a couple of times that they tried to create new architectures for functional languages but they failed.
Would these architectures succeed today? Are the constraints (business, technical, community) different enough, or will they differ enough in 10 years, that these architectures could succeed?
The memristor and non-volatile RAM should change how we code: we'd no longer need to think about state persistence as a separate step. The problem is that then we can't fix things by rebooting 😄
I’m unrelentingly hyped for https://en.wikipedia.org/wiki/Field-programmable_gate_array and anything resembling graph-processing. Merging memory, storage, and computing into the same substrate is also exciting.
More than the specific possible hardware, my real hope is to remember computation is king, and hardware should be in support of the computation we’d like to do quickly. If you have a way of computing that may be awesome but is hard to reconcile with hardware, that’s a sign hardware should adapt to fit.
Intel has had long enough dictating the terms of programming, and we certainly don’t want to replace them with an ARMs race to the next monopoly.
“If hardware is unjust, change hardware!”
— stealing from a quote about naturalism
I was led down this path by wondering if there are enough people writing Python, for example, that a Python interpreter compiled to hardware (a machine that only knows how to interpret Python 3.7, say, and is optimised to do so) would be a great idea, i.e. one that makes sense from a unit-cost perspective and is efficient in other ways.
We probably don’t want to compile user code to hardware, since we still want to be able to change code quickly, but for things like the interpreter, which are relatively fixed and used by enough people, maybe it makes sense.
Should hardware be a more significant portion of the stack? Does the tradeoff make sense? Flexibility for power. (Just checked Wikipedia: yeah, an ASIC or FPGA that is just for Python, for example.) I wonder if anyone has tried it and whether it would be more widespread in the future.
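To make that concrete: here’s a minimal sketch, in Python with made-up opcodes (nothing like CPython’s actual eval loop), of the kind of fixed dispatch loop such a chip would freeze into silicon:

```python
# Toy stack-machine interpreter: the fixed dispatch loop an
# "interpreter ASIC" would hard-wire. The opcodes are hypothetical,
# not real CPython bytecode.
PUSH, ADD, MUL, PRINT = range(4)

def run(program):
    stack = []
    pc = 0
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == PUSH:  # operand follows the opcode
            stack.append(program[pc])
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == PRINT:
            print(stack.pop())

# (2 + 3) * 4 -> prints 20
run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT])
```

The `if`/`elif` dispatch is the part a Python ASIC would replace with wiring; user programs stay as mutable data, and only the loop gets frozen.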
It looks like GPUs are the way we break free of the Von Neumann bottleneck. Desktop discrete GPUs have thousands of processing units. Every personal computer (desktop, laptop, mobile) now has a powerful GPU. Machine learning has made GPUs even more relevant. The future of code needs to take GPUs into account, and not ignore this powerful hardware. My project, Curv, lets you write simple, high level code that executes either on the CPU or on the GPU. Curv is a pure functional language because it's easier to compile pure functional code into massively parallel GPU code. I think that GPUs are viable as an architecture for pure functional programming. I do think that you need to design your pure functional language with GPUs in mind, and that wasn't a consideration for the design of Haskell. I'm not claiming that Haskell is the ideal GPU programming language.
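Not Curv itself, but a toy Python analogy for why purity matters on a GPU: a pure elementwise function imposes no ordering between calls, so a compiler is free to fan each element out to its own GPU lane (the `shade` function here is made up):

```python
# A pure function: output depends only on its input, no shared state.
def shade(x):
    return x * x + 1

pixels = [0.0, 0.5, 1.0, 2.0]

# Written as a sequential map, but because shade() is pure, each call
# is independent, so a GPU backend may legally run all of them at once.
result = [shade(p) for p in pixels]
print(result)  # [1.0, 1.25, 2.0, 5.0]
```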
> If you have a way of computing that may be awesome but is hard to reconcile with hardware, that’s a sign hardware should adapt to fit.

That hasn't worked out for the Haskell people:

> I have heard Simon Peyton Jones say a couple of times that they tried to create new architectures for functional languages but they failed.
Doug Moen +1
I hope one day in the future we can design programming systems on top of computational constructs like concurrency, parallelism, uncertainty, etc. and have hardware like GPUs, CPUs or networked machines be chosen dynamically without human design interventions for specific architectures.
These are all technically challenging and won’t always succeed, but for this to have a chance we’ll need to turn the direction of funding around, so that it flows from code requirements to hardware imperatives.
My criticism of Haskell is that ubiquitous lazy evaluation is not a good idea. This feature means that understanding and tuning the performance of a program is much harder in Haskell than in any other language. Debugging is also messed up: it doesn't really make sense to ask for a stack trace. Attempts to build high performance Haskell hardware have failed. All these facts are closely related. Haskell is somehow lacking in mechanical empathy.
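A rough Python analogy (generators are only a crude stand-in for Haskell's laziness, and `expensive` is a made-up placeholder): none of the work happens where the pipeline is written; it all fires at the point of consumption, which is exactly why profiles and stack traces stop lining up with the source.

```python
# Generators as a crude stand-in for lazy evaluation.
def expensive(x):
    return sum(i * x for i in range(10_000))  # stands in for real work

# Building the pipeline costs nothing: no element is computed yet.
pipeline = (expensive(x) for x in range(1000))
pipeline = (v + 1 for v in pipeline)

# All of the cost lands here, at the point of consumption, far from
# the lines that defined the work. Under pervasive laziness, "where
# is my program slow?" no longer has a local answer.
total = sum(pipeline)
print(total)
```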
Modern CPUs are designed to execute single threaded C programs as quickly as possible. There is a ton of silicon devoted to superscalar out-of-order speculative execution, etc. It isn't an efficient use of hardware.
> My real hope is to remember computation is king, and hardware should be in support of the computation we’d like to do quickly.
Modern GPUs are designed to maximize the percentage of silicon that contains ALUs, relative to the amount of control logic. It is an efficient way to use silicon, if your goal is to maximize the amount of compute that can be accomplished with a given transistor budget.
So we start with this super powerful hardware with monster compute abilities, and we build languages that let us easily program it.
https://youtu.be/ubaX1Smg6pY was recommended to me by Garth Goldwater when I first joined the community. That talk was very profound for me; it just keeps providing nuggets of wisdom! Thanks again Garth!
At one point in the presentation, Alan Kay talks about how, when he was at Xerox PARC, they could just tell the Intel guys to adjust the microcode on the CPUs. He goes on to say that modern FPGAs are the closest thing we have to that in the modern era.
As usual, Alan Kay is way ahead of his time! I hope he is right.
> My real hope is to remember computation is king, and hardware should be in support of the computation we’d like to do quickly.

I disagree with this statement on some fundamental points. It implies a fundamental adversarial hierarchy: Us vs. Them; Software vs. Hardware; one must submit to the other! This is a counterproductive view, imo.
My ideal future of hardware will come from better symbiosis of computation and hardware. Not domination of one over the other.
Referring to the Alan Kay talk again, Hardware and Software design used to be much closer together. A Programmer and a Hardware engineer would sit down and solve problems together, fighting the tyranny of Physics together 😛.
My hope for the future is that this kind of symbiosis has a renaissance.
Alan Kay often cites Bob Barton as a huge inspiration for the hardware/software overlap, if anyone who knows more about hardware than me is looking for a productive name to google.
I tend to be bearish on this whole idea of a "Von Neumann bottleneck".
There have been many, many attempts to create processors that operate on graphs. One of the more recent ones was by my research group in grad school 10 or so years ago: https://www.cs.utexas.edu/~trips. But see also Dataflow architectures (https://en.wikipedia.org/wiki/Dataflow_architecture), and MIT's Raw processor (http://groups.csail.mit.edu/cag/raw). While it's possible we're just one breakthrough away from making it work, it is an Open Research Problem.
Hardware people know what they're doing, far more than us software people. Whether that's because they're Superior Human Beings, or because their domain is easier, you seek to make hardware more like software at your peril. Imagine having to fix a security vulnerability in a chip instead of a compiler.
Hardware has less open source, and you seek to implement layers of software in hardware at your peril. Imagine having to convince your processor supplier of the value of open source. Imagine having to create a new programming language in nights and weekends in hardware. Now ask yourself why you would want to ask others to do something that hobbyists wouldn't do, to put up with a less malleable medium. The best ideas in computing have come from hobbyists. I want a more inclusive future, not less.
Python is Python only because all the hard stuff has native libraries that work with existing processors. If you plan to migrate those to pure Python, that gives up much of the benefit of using an existing language with a large install base. Might as well use a better language.
The term "von Neumann bottleneck" is fundamentally about improving performance. Improving performance isn't in my top 10 major problems facing the future of software.
> My ideal future of hardware will come from better symbiosis of computation and hardware. Not domination of one over the other.
I realise how what I said may have come across, and want to clarify: here I totally agree! I say computation in reference to the abstract notion of doing things with computers, whether through hardware or code. It was actually Alan Kay who inspired me to give that response, though I forget from where.
Another way to put it might be to say that both hardware and software are things we invent to serve some computing goal, and we only sometimes know how they’ll help.
There’s a group at Stanford called the “Agile Hardware Group” (https://aha.stanford.edu/). I’m not involved, but my advisor + many people in my group are.
My 10k ft perspective: hardware tools suck. Like, orders of magnitude worse than any software stack.
- Production tools for hardware are literally Tcl/Perl scripts that generate strings of Verilog code and smash them together (see the toy sketch after this list).
- Place & route (lowering a design onto the physical wires of a chip) takes potentially hours to run. No notion of incremental compilation: small change -> hours to re-run.
- Verilog is a trash fire. It’s a programming language designed by hardware engineers, so it has insane, poorly specified semantics.
- Everything is closed source!! From circuit pieces to compilers, every tool is considered valuable IP. It’s 1970s-era computing, but today.
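For anyone who hasn’t seen that first point up close, a toy Python stand-in (the real scripts are Tcl/Perl, and this `make_adder` generator is invented for illustration):

```python
# Toy version of a production hardware "generator": build Verilog
# by concatenating strings. Module and port names are made up.
def make_adder(name, width):
    return "\n".join([
        f"module {name} (",
        f"  input  [{width - 1}:0] a,",
        f"  input  [{width - 1}:0] b,",
        f"  output [{width - 1}:0] sum",
        ");",
        "  assign sum = a + b;",
        "endmodule",
    ])

print(make_adder("adder8", 8))
```

Nothing checks that the generated text even parses as Verilog; mistakes surface hours later, in place & route.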
These issues need to be fixed for hardware to really take off beyond its current state.
The maker movement was poised to make hardware more accessible but seems to have fizzled out. I used to be so involved with it. Wonder what 2.0 would look like.
On the open source side, I remain cautiously interested in https://riscv.org
There is also the http://Libre-SOC.org project, which is likewise Open Hardware. They are building a system-on-a-chip that contains an integrated CPU, GPU, and VPU, inspired by the CDC 6600 architecture. I gather that their architecture is more efficient than RISC-V, and that programming the GPU is much simpler than with a conventional GPU due to close integration with the CPU, but I don't follow the project closely.
Yeah, very bullish on open-source hardware. I'd like hardware to get commoditized, so vendors lose their leverage.
I'm also interested in attempts to route around pre-firmware backdoors: https://puri.sm/learn/avoiding-intel-amt; https://openpowerfoundation.org (latter via https://www.talospace.com/2019/11/talos-ii-and-talos-ii-lite-officially.html)