Insomnia

yawn

5:30am and I haven't slept a wink yet! I really need to sort out my lifestyle so I get (a) exercise and (b) time to think every day. Time to think is important for me; if I don't get enough, then when I go to bed, I lie there and think. Lots.

Tonight's thoughts have included:

  1. Some ideas about how whole-program transformations (e.g., the macroexpander/partial evaluator and the OO system) in CHROME might be handled. The OO system needs to be a whole-program transformation rather than just some macros using the normal macroexpander, since things like inheritance graphs and method lists for generic functions need to be accumulated together; most Lisps handle that with macros that destructively update global data structures, but I'm trying to design a system without rampant mutation, so I need a whole-program code walk to do it. Clearly, since we want to be able to write macros that produce generic functions, methods, and the like, this pass needs to run AFTER normal macro expansion, but before the compiler gets its hands on it (there's a rough sketch of such a pass after this list).
  2. Some ideas about separating the generic function/method system - the dispatch part of Lispy OO - from the classes-and-inheritance part. Subtype relationships used to dispatch GFs should be doable with plain predicates - pair?, my-record-type?, etc. - or with more complex predicate expressions over groups of arguments, so we can support multi-parameter typeclasses in the Haskell sense, giving a rich interface/implementation system as well as a traditional records-with-single-inheritance class system. To do this properly we also need declarations that one predicate implies another - (integer? x) -> (number? x) - so that a method on numbers will be used for integers, yet a more specific integer method can override it (the second sketch after this list shows the idea). I'm not sure how decidable the "most specific method wins" rule can be with complex multivariate type predicates, though. Must experiment and ask knowledgeable formal logic folks.
  3. Thoughts about future computer architectures. The drive is for more speed, but these days most CPUs sit idle waiting for memory to feed them code and data, or (much more often) for the disk, network, or user to feed them. The only places where the CPU maxes out tend to be highly parallelisable tasks - server stuff handling lots of requests at once, games and other audiovisual things, and scientific number crunching. This suggests to me that a future direction of growth would be one or more high-bang-per-buck MISC processors embedded directly into SRAM chips (sort of like a CPU with an onboard cache... but the other way around, since the CPU would be much smaller than the SRAM array), bonded onto the same chip carrier module as a set of DRAMs. The CPU-and-SRAM chips and the DRAM chips would be designed together as a tightly-coupled integrated unit, fast because of the short traces and because the lack of standardised modular interfaces between them (like DIMM and CPU socket standards) lets the internal interface evolve rapidly. The whole CPU+SRAM+DRAM unit is then pluggable into a standardised socket, of which motherboards will have several. The result? Lots of low-power, reasonably fast CPU cores with high-speed access to high-speed memory. And for those demanding games/media/scientific tasks? The higher-end modules will have FPGAs on as well...
  4. Forget nasty, complex, unpredictable memory caches: have a nonuniform physical address space (with regions of memory of varying speed) and let the OS use the virtual memory system to allocate pages to regions based upon application hints and/or access statistics (the third sketch after this list shows one possible placement policy). Not having cache tags and cache management logic frees up more chip area to put actual memory in.
  5. We've been wondering about getting goats lately. Goats are useful creatures; they produce milk (which can be turned into cheese) and they produce decent wool (just not in the quantities sheep produce it). Their milk and cheese don't make Sarah ill the way cow-derived versions do. Plus, we need something to come and graze our paddock. We've been doing a little bit of research and apparently two goats would be about right for the space we have. We'd need to put an inner layer of fence around the paddock to keep them in while still allowing the public footpath, and we'd need a little shed for them to shelter in. But thinking about setting things up in the paddock, I'm now wondering if it would be a good idea to build a duck run in there too, down at the bottom by the stream, all securely fenced against foxes and mink and with a duck-house up on stilts in case of flooding, but with a little pond dug out for them (connected to the stream by a channel with a grille over it to prevent escapage). It would be a convenient place to have the ducks, and it would make a good home for them, I think.
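
On (1), here's a very rough sketch of the kind of mutation-free pass I mean. It's not CHROME code (CHROME doesn't exist yet); it's just a Haskell toy with invented names, folding once over the already-macroexpanded program and producing the generic-function method tables as an immutable map instead of bashing global variables:

    {-# LANGUAGE LambdaCase #-}
    -- Toy sketch only: a purely functional whole-program pass that runs after
    -- macroexpansion and folds over the top-level forms once, accumulating
    -- generic-function method tables in an immutable map instead of mutating
    -- global tables.  All the names here are invented for illustration.
    module MethodCollect where

    import qualified Data.Map.Strict as M

    type GFName = String
    type Pred   = String

    -- Already-macroexpanded top-level forms, hugely simplified.
    data Form
      = DefGeneric GFName          -- (define-generic area)
      | DefMethod GFName Pred Form -- (define-method area (circle? c) ...)
      | Other String               -- anything else, passed through untouched
      deriving Show

    data Method = Method { methodPred :: Pred, methodBody :: Form }
      deriving Show

    -- The accumulated "OO world": every generic function and its methods.
    type MethodTable = M.Map GFName [Method]

    -- One left fold over the whole expanded program builds the table.
    collectMethods :: [Form] -> MethodTable
    collectMethods = foldl step M.empty
      where
        step tbl = \case
          DefGeneric gf       -> M.insertWith (flip (++)) gf [] tbl
          DefMethod gf p body -> M.insertWith (flip (++)) gf [Method p body] tbl
          Other _             -> tbl

    -- e.g. collectMethods [DefGeneric "area", DefMethod "area" "circle?" (Other "...")]
    --        ==> fromList [("area", [Method "circle?" (Other "...")])]

The point is just that "accumulate stuff across the whole program" doesn't have to mean "mutate a table at macroexpansion time".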
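
On (2), a toy of the predicate-dispatch idea: predicates are just names, implications are declared as pairs, and the most specific applicable method is the one whose predicate implies all the other applicable ones. Again, a Haskell doodle with invented names rather than a design:

    -- Toy sketch only: predicate dispatch with declared implications deciding
    -- specificity.  The predicate names and the "most specific" rule below
    -- are invented for illustration.
    module PredDispatch where

    type Pred = String

    -- Declared implications, e.g. (integer? x) -> (number? x).
    implications :: [(Pred, Pred)]
    implications =
      [ ("integer?",  "rational?")
      , ("rational?", "real?")
      , ("real?",     "number?")
      ]

    -- Reflexive, transitive closure: does p imply q?
    implies :: Pred -> Pred -> Bool
    implies p q
      | p == q    = True
      | otherwise = or [ implies b q | (a, b) <- implications, a == p ]

    -- A method is the predicate it dispatches on plus a label for its body.
    type Method = (Pred, String)

    -- A method applies if the argument's predicate implies the method's one
    -- (an integer is a number, so a number? method applies to integers).
    applicable :: Pred -> [Method] -> [Method]
    applicable argPred = filter (\(p, _) -> argPred `implies` p)

    -- Most specific applicable method: the one whose predicate implies every
    -- other applicable predicate.  If there isn't one, dispatch is ambiguous.
    dispatch :: Pred -> [Method] -> Either String Method
    dispatch argPred methods =
      case [ m | m@(p, _) <- apps, all (\(q, _) -> p `implies` q) apps ] of
        (m : _) -> Right m
        []      -> Left "no unique most-specific method"
      where
        apps = applicable argPred methods

    -- e.g. dispatch "integer?" [("number?", "generic-add"), ("integer?", "fixnum-add")]
    --        ==> Right ("integer?", "fixnum-add")

Whether that "most specific" rule stays decidable once the predicates become arbitrary multivariate expressions is exactly the bit I need to think harder about.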
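
And on (4), the OS-side policy could start out as dumb as "sort pages by how hot they are and fill the fast regions first". A toy of that, with made-up region names and sizes:

    -- Toy sketch only: page placement for a nonuniform physical address
    -- space - no hardware cache, just regions of different speeds, filled
    -- hottest-pages-first.  Region names, sizes, and the greedy policy are
    -- all invented for illustration.
    module PagePlacement where

    import Data.List (sortBy)
    import Data.Ord  (Down (..), comparing)

    data Region = Region { regionName :: String, pageSlots :: Int }
      deriving Show

    -- Fastest first: on-die SRAM, then in-module DRAM, then everything else.
    regions :: [Region]
    regions = [Region "sram" 2, Region "near-dram" 4, Region "far-dram" 1000]

    type PageId      = Int
    type AccessCount = Int  -- from application hints and/or measured statistics

    -- Sort pages by how hot they are, then hand out slots in speed order.
    placePages :: [(PageId, AccessCount)] -> [(PageId, String)]
    placePages pages = zip (map fst hotFirst) slots
      where
        hotFirst = sortBy (comparing (Down . snd)) pages
        slots    = concatMap (\r -> replicate (pageSlots r) (regionName r)) regions

    -- e.g. placePages [(1, 5), (2, 900), (3, 40), (4, 7)]
    --        ==> [(2,"sram"),(3,"sram"),(4,"near-dram"),(1,"near-dram")]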

It's now 6am. Do I try and go to sleep, or try and last the day out? Hmmm...

1 Comment

  • By @ndy Macolleague, Wed 30th Apr 2008 @ 2:53 pm

    wrt 3: I think that there is already a trend towards this sort of thing in the HPC sector. There are already companies offering FPGA products that sit on the PCIe bus (and possibly even the HyperTransport bus) and act as streaming processors. It's a small jump to have a Dynamic DIMM where, between the CPU's write to and read from a given address, the DIMM itself performs a calculation on the data.

    The main hurdle is integrating it with people's existing systems: writing OS drivers that can drive these things and supplying people with the tools to write and customise their own algorithms. How does the ARGON architecture stand up in the face of running standardised code on conventional CPUs as well as FPGAs? Writing effective FPGA "compilers" is an interesting topic at the forefront of computer science research.

Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales