Last Sunday at Play @ Kamala Mills
Wassup people. Squarespace has made some major changes. The new blog templates are very attractive, so I'm moving to one of them. It's going to break some of my old blogs, but valuable lessons have been learnt.
Hopefully this will look a lot better.
Also, some new apps have come out recently; one of them is called Prisma. It rarely works, but when it does it makes pics like these.
It's available for both Android and iPhones
So, ever since I got back from London, I have attended four birthdays: two for kids under 13 and two for adults over 30. And I saw a marked difference between the two. The kids had a blast. The adults were occupied with making sure the guests were taken care of, fed appetisers, looked after. In short, it was all about the guests. The person whose birthday it was, was kinda just there. I didn't see that person abandon all worries and just have a good time. Which brings me to my point: have we as adults forgotten how to have fun? Are we so caught up in man-made social etiquette that we have no idea how to let go? It certainly seems like it.
Nostalgia. Such a powerful feeling ya? The right catalyst can take you back to that exact time and place, and you remember everything associated with it.
For me it's always been my gadgets. From my earliest Merlin game to my first cellphone. Here are some of my favourites.
The Merlin game phone. I remember the evening when my dad surprised me with this. Very basic, no graphical interface. The whole thing worked on lights that lit up the face of the game. Tic-tac-toe was my favourite.
Sinclair ZX Spectrum computer. Wrote my first lines of BASIC and PASCAL on this one. But anyone who owned this remembers it for the video games. It was a real test of patience. You had to connect a cassette player to this, which would load a 200-kilobyte game over 10 minutes. The first time was never a charm.
Do you remember mix tapes? I used to go to the local 'music store', who, by the way, was the ORIGINAL pirate. They would transfer 30 songs over the two sides of a tape. Anyone who valued good music would always go for the metal tape. What was metal in it?
Of course you know this. The Atari game console, the first popular game console in the world. My grandad got this from Japan. Built like a tank. Pacman, Asteroids and bruised hands. We all remember this ya?
Samurai. That is what the Nintendo NES was known as. And you didn't buy this. You rented this. For a weekend. Called all your friends over and played till your eyes or your thumbs bled. Contra anyone?
And how can we forget our first video games. This was my favourite one. My first double screen game. I must have delivered over a million milk boxes. Donkey Kong anyone?
You all had this. But this was the first. Sony pioneered the mobile music industry. The first Walkman was just awesome. I remember so many school trips with this one. Ever wonder where these things disappeared to?
And if you had to upgrade, this was the next logical choice. Even though the disc 'skipped' like crazy, the high-fidelity music made it worth the pain.
Video game console in your hand. The first Gameboy was a weekend waster. Colour screen and superb games. Paperboy anyone?
My friend's dad had this. Don't think he ever used it beyond the first few days. Just like the Apple Watch, hmmm?
My uncle had this. My first serious computer. This is the one that got me hooked. I could write and write so much about this, but this post needs to end before my Uber shows up.
The first PDA. The future had arrived.
Finally. My first cell phone. Rest is history.
A ZFS developer’s analysis of the good and bad in Apple’s new APFS file system
Encryption options are great, but Apple's attitude on checksums is still funky.
Apple announced a new file system that will make its way into all of its OS variants (macOS, tvOS, iOS, watchOS) in the coming years. Media coverage to this point has been mostly breathless elongations of Apple's developer documentation. With a dearth of detail I decided to attend the presentation and Q&A with the APFS team at WWDC. Dominic Giampaolo and Eric Tamura, two members of the APFS team, gave an overview to a packed room; along with other members of the team, they patiently answered questions later in the day. With those data points and some first-hand usage I wanted to provide an overview and analysis both as a user of Apple-ecosystem products and as a long-time operating system and file system developer.
The overview is divided into several sections. I'd encourage you to jump around to topics of interest or skip right to the conclusion (or to the tweet summary). Highest praise goes to encryption; ire to data integrity.
APFS, the Apple File System, was itself started in 2014 with Giampaolo as its lead engineer. It's a stand-alone, from-scratch implementation (an earlier version of this post noted a dependency on Core Storage, but Giampaolo set me straight in this comment). I asked him about looking for inspiration in other modern file systems such as BSD's HAMMER, Linux's btrfs, or OpenZFS (Solaris, illumos, FreeBSD, Mac OS X, Ubuntu Linux, etc.), all of which have features similar to what APFS intends to deliver. (And note that Apple built a fairly complete port of ZFS, though Giampaolo was not apparently part of the group advocating for it.) Giampaolo explained that he was aware of them as a self-described file system guy (he built the file system in BeOS, unfairly relegated to obscurity when Apple opted to purchase NeXTSTEP instead), but didn't delve too deeply for fear, he said, of tainting himself.
Giampaolo praised the APFS testing team as being exemplary. This is absolutely critical. A common adage is that it takes a decade to mature a file system, and my experience with ZFS more or less confirms this. Apple will be delivering APFS broadly with 3-4 years of development, so it will need to accelerate quickly to maturity.
Paying down debt
HFS was introduced in 1985 when the Mac 512K (of memory! Holy smokes!) was Apple's flagship. HFS+, a significant iteration, shipped in 1998 on the G3 PowerMacs with 4GB hard drives. The typical storage capacity of a home computer has increased by a factor of over 1,000 since 1998 (and let's not even talk about 1985). HFS+ has been pulled in a bunch of competing directions with different forks for different devices (e.g. I've been told by inside sources that the iOS team created their own HFS variant, working so covertly that not even the Mac OS team knew) and different features (e.g. journaling, case sensitivity). It's old. It's a mess. And, critically, it's missing a bunch of features that are really considered basic costs of doing business for most operating systems. Wikipedia lists nanosecond timestamps, checksums, snapshots, and sparse file support among those missing features. Add to that the obvious gap of large device support and you've got a big chunk of the APFS feature list.
APFS first and foremost pays down the unsustainable technical debt that Apple has been carrying in HFS+. (In 2001 ZFS grew from a similar need where UFS had been evolved since 1977.) It unifies the multifarious forks. It introduces the expected features. In general it first brings the derelict building up to code.
Compression is an obvious common feature that's missing in the APFS feature list. It's conceptually quite easy, I told the development team (we had it in ZFS from the outset), so why not include it? To appeal to Giampaolo's BeOS nostalgia I even recalled my job interview with Be in 2000 when they talked about how compression actually improved overall performance since data I/O is far more expensive than computation (obvious now, but novel then). The Apple folks agreed, and—in typical Apple fashion—neither confirmed nor denied while strongly implying that it's definitely a feature we can expect in APFS. I'll be surprised if compression isn't included in its public launch.
Encryption is clearly a core feature of APFS. This comes from diverse requirements from the various devices; for example, multiple keys within file systems on the iPhone, or per-user keys on laptops. I heard the term "innovative" quite a bit at WWDC, but here the term is aptly applied to APFS. It supports several different encryption choices for a file system:
• Single-key for metadata and user data
• Multi-key with different choices for metadata, files, and even sections of a file ("extents")
Multi-key encryption is particularly relevant for portables where all data might be encrypted, but unlocking your phone provides access to an additional key and therefore additional data. Unfortunately this doesn't seem to be working in the first beta of macOS Sierra (specifying fileEncryption when creating a new volume with diskutil results in a file system that reports "Is Encrypted" as "No").
Related to encryption, I noticed an undocumented feature while playing around with diskutil (which prompts you for interactive confirmation of the destructive power of APFS unless this is added to the command-line: -IHaveBeenWarnedThatAPFSIsPreReleaseAndThatIMayLoseData; I'm not making this up). APFS (apparently) supports the ability to securely and instantaneously erase a file system with the "effaceable" option when creating a new volume in diskutil. This presumably builds a secret key that cannot be extracted from APFS and encrypts the file system with it. A secure erase then need only delete the key rather than needing to scramble and re-scramble the full disk to ensure total eradication. Various iOS docs refer to this capability requiring some specialized hardware; it will be interesting to see what the option means on macOS. Either way, let's not mention this to the FBI or NSA, agreed?
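To make the crypto-erase idea concrete, here's a toy sketch in Python. This is my illustration of the concept, not Apple's implementation: the SHA-256 counter-mode keystream below stands in for whatever real cipher APFS uses, and `EffaceableVolume` is an invented name. The point is only the shape of the trick: every write is encrypted under a key that never leaves the device, so "secure erase" is a constant-time operation regardless of volume size.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-based keystream (illustrative stream cipher only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class EffaceableVolume:
    """All writes are encrypted under a volume key; erasing = dropping the key."""
    def __init__(self):
        self._key = os.urandom(32)   # never extractable from the "device"
        self._blocks = {}

    def write(self, addr: int, data: bytes):
        self._blocks[addr] = keystream_xor(self._key, data)

    def read(self, addr: int) -> bytes:
        if self._key is None:
            raise IOError("volume has been effaced")
        return keystream_xor(self._key, self._blocks[addr])

    def secure_erase(self):
        # O(1): ciphertext remains on "disk" but is now unrecoverable
        self._key = None

vol = EffaceableVolume()
vol.write(0, b"tax returns")
assert vol.read(0) == b"tax returns"
vol.secure_erase()               # instantaneous, no matter how big the volume
```

No scrubbing and re-scrubbing of the full disk; deleting 32 bytes of key material is the whole erase.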
Snapshots and backup
APFS brings a much-desired file system feature: snapshots. A snapshot lets you freeze the state of a file system at a particular moment and continue to use and modify that file system while preserving the old data. It does so in a space-efficient fashion where, effectively, changes are tracked and only new data takes up additional space. This has the potential to be extremely valuable for backup by efficiently tracking the data that has changed since the last backup.
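A toy model shows what space-efficient snapshots buy you. Assume (a simplification of mine, not APFS's actual design) that the file system is just a map from logical addresses to immutable blocks: a snapshot is then a frozen copy of the mapping, no data is duplicated, and only blocks written after the snapshot consume new space.

```python
class SnapshotFS:
    """Toy copy-on-write file system: blocks are never overwritten in place."""
    def __init__(self):
        self.blocks = {}    # block id -> data
        self.live = {}      # logical addr -> block id (the current mapping)
        self.next_id = 0

    def write(self, addr, data):
        # COW: allocate a fresh block and repoint, never modify old blocks
        self.blocks[self.next_id] = data
        self.live[addr] = self.next_id
        self.next_id += 1

    def snapshot(self):
        # Freeze the current mapping; no user data is copied
        return dict(self.live)

    def read(self, addr, snap=None):
        mapping = snap if snap is not None else self.live
        return self.blocks[mapping[addr]]

fs = SnapshotFS()
fs.write("a", b"v1")
snap = fs.snapshot()
fs.write("a", b"v2")             # new block; the snapshot still sees v1
assert fs.read("a") == b"v2"
assert fs.read("a", snap) == b"v1"
```

Diffing a snapshot's mapping against the live mapping identifies exactly the blocks that changed, which is the basis of efficient incremental backup.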
ZFS includes snapshots and serialization mechanisms that make it efficient to back up file systems or transfer file systems to a remote location. Will APFS work like that? Probably not, answered Giampaolo. ZFS sends all changed data, while Time Machine can have exclusion lists and the like. That seems surmountable, but we'll see what Apple does. APFS right now is incompatible with Time Machine due to the lack of directory hard links, a fairly disgusting implementation that likely contributes to Time Machine's questionable reliability. Hopefully APFS will create some efficient serialization for Time Machine backup.
While APFS dev manager Eric Tamura demonstrated snapshots at WWDC, the required utilities aren't included in the macOS Sierra beta. I used DTrace (technology I'm increasingly amazed that Apple ported from OpenSolaris) to find a tantalizingly named new system call fs_snapshot; I'll leave it to others to reverse engineer its proper use.
APFS brings another new feature known as "space sharing." A single APFS "container" that spans a device can have multiple "volumes" (file systems) within it. Apple contrasts this with the static allocation of disk space to support multiple HFS+ instances, which seems both specious and an uncommon use case. Both ZFS and btrfs have a similar concept of a shared pool of storage with nested file systems for administration and management.
Speaking with Giampaolo and other members of the APFS team, we discussed how volumes are the unit by which users can control things like snapshots and encryption. You'd want multiple volumes to correspond with different policies around those settings. For example, while you might want to snapshot and backup your system each day, the massive /private/var/vm/sleepimage (for saving memory when hibernating) should live on its own and not be backed up.
Space sharing is more of an operational detail than a game-changing feature. You can think of it like special folders with snapshot and encryption controls—which is probably why Apple's marketing department has yet to make me a job offer. Adding new volumes can fail with an opaque error (does -69625 mean anything to you?), but using a larger disk image resolved the problem.
A modern trend in file systems has been to store data more efficiently to effectively increase the size of your device. Common approaches to this include compression (which, as noted above, is very very likely coming) and deduplication. Dedup finds common blocks and avoids storing them multiply. This is potentially highly beneficial for file servers where many users or many virtual machines might have copies of the same file; it's probably not useful for the single-user or few-user environments that Apple cares about. (Yes, they have server-ish offerings but their heart clearly isn't into it.) It's also furiously hard to do well, as I painfully learned while supporting ZFS.
Apple's sort-of-unique contribution to space efficiency is constant-time cloning of files and directories. As a quick aside, "files" in macOS are often really directories; it's a convenient lie they tell to allow logically related collections of files to be treated as indivisible units. Right-click an application and select "Show Package Contents" to see what I mean. Accordingly, I'm going to use the term "file" rather than "file or directory" in sympathy for the patient readers who have made it this far.
With APFS, if you copy a file within the same file system (or possibly the same container; more on this later), no data is actually duplicated. Instead, a constant amount of metadata is updated and the on-disk data is shared. Changes to either copy cause new space to be allocated (this is called "copy on write," or COW). btrfs also supports this and calls the feature "reflinks"—link by reference.
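Here's a minimal sketch of clone-by-reference, my own illustration rather than the APFS on-disk format: copying a file bumps reference counts on its blocks (constant metadata work per block, zero data copied), a write to either copy allocates a new block for just that copy, and deleting a file frees only blocks whose refcount drops to zero.

```python
class CloneStore:
    """Toy block store with refcounted, clone-shareable blocks."""
    def __init__(self):
        self.refs = {}      # block id -> refcount
        self.data = {}      # block id -> bytes
        self.files = {}     # name -> list of block ids
        self.next_id = 0

    def _alloc(self, chunk):
        self.data[self.next_id] = chunk
        self.refs[self.next_id] = 1
        self.next_id += 1
        return self.next_id - 1

    def create(self, name, chunks):
        self.files[name] = [self._alloc(c) for c in chunks]

    def clone(self, src, dst):
        # "copy": share the blocks, bump refcounts, move no data
        self.files[dst] = list(self.files[src])
        for b in self.files[dst]:
            self.refs[b] += 1

    def write_chunk(self, name, index, chunk):
        # copy-on-write: detach only this file's view of the block
        old = self.files[name][index]
        self.refs[old] -= 1
        if self.refs[old] == 0:
            del self.data[old], self.refs[old]
        self.files[name][index] = self._alloc(chunk)

    def delete(self, name):
        for b in self.files.pop(name):
            self.refs[b] -= 1
            if self.refs[b] == 0:
                del self.data[b], self.refs[b]

    def space_used(self):
        return sum(len(c) for c in self.data.values())

store = CloneStore()
store.create("thesis", [b"x" * 4096] * 4)
before = store.space_used()
store.clone("thesis", "thesis-backup")
assert store.space_used() == before   # the "copy" consumed no data space
store.delete("thesis")
assert store.space_used() == before   # ...and deleting one clone freed none
```

The last two assertions preview both sides of the coin discussed below: copies that are free, and deletions that reclaim nothing.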
I haven't seen this offered in other file systems (btrfs excepted), and it clearly makes for a good demo, but it got me wondering about the use case. Copying files between devices (e.g. to a USB stick for sharing) still takes time proportional to the amount of data copied of course. Why would I want to copy a file locally? The common case I could think of is the layman's version control: "thesis," "thesis-backup," "thesis-old," "thesis-saving because I'm making edits while drunk."
There are basically three categories of files:
• Files that are fully overwritten each time: images, MS Office docs, videos, etc.
• Files that are appended to, like log files
• Files with a record-based structure, such as database files
For the average user, most files fall into that first category. So with APFS I can make a copy of my document and get the benefits of space sharing, but those benefits will be eradicated as soon as I save the new revision. Perhaps users of larger files have a greater need for this and have a better idea of how it might be used.
Personally, my only use case is taking a file—say time-shifted Game of Thrones episodes falling into the "fair use" section of copyright law—and sticking it in Dropbox. Currently I need to choose to make a copy or permanently move the file to my Dropbox folder. Clones would let me do this more easily. But then, so would hard links (a nearly ubiquitous file system feature that lets a file appear in multiple directories).
Clones open the door for potential confusion. While copying a file may take up no space, so too deleting a file may free no space. Imagine trying to free space on your system, and needing to hunt down the last clone of a large file to actually get your space back.
APFS engineers don't seem to have many use cases in mind; at WWDC they asked for suggestions from the assembled developers (the best I've heard is for copied virtual machines, which is not exactly a mass-market problem). If the focus is generic revision control, I'm surprised that Apple didn't shoot for a more elegant solution. One could imagine functionality with APFS that allows a user to enable per-file Time Machine—change tracking for any file. This would create a new type of file where each version is recorded transparently and automatically. You could navigate to previous versions, prune the history, or delete the whole pile of versions at once (with no stray clones to hunt down). In fact, Apple introduced something related 5 years ago, but I've literally never seen or heard of it until researching this post (show of hands if you've clicked "Browse All Versions…"). APFS could clean up its implementation, simplify its use, and bring generic support for all applications. None of this solves my Game of Thrones storage problem, but I'm not even sure it's much of a problem…
Side note: Finder copy creates space-efficient clones, but cp from the command line does not.
APFS claims to be optimized for flash. Flash memory (NAND) is the stuff in your speedy SSD. Apple changed the computing industry when it put flash into the iPod and iPhone, volumes for which fundamentally changed the economics of flash. This consumer change impacted the enterprise (as it often does), giving rise to hybrid and all-flash arrays. Ten years ago flash cost as much as DRAM; now it's challenging the economics of hard disks.
SSDs mimic the block interface of conventional hard drives, but the underlying technology is completely different. In particular, while magnetic media can read or write sectors arbitrarily, flash erases large chunks (blocks) and reads and writes smaller chunks (pages). The management is done by what's called the flash translation layer (FTL), software that makes blocks and pages appear more like a hard drive. (Editor's note: we've got a huge, 10,000 word breakdown on how SSDs work if you'd like to know more). An FTL is very similar to a file system, creating a virtual mapping (a translation) between block addresses and locations within the media. Apple controls the full stack, including the SSD, FTL, and file system; they could have built something differentiated, optimizing these components to work together. What APFS does, however, is simply write in patterns known to be more easily handled by NAND. It's a file system with flash-aware characteristics rather than one written explicitly for the native flash interfaces—more or less what you'd expect in 2016.
Also on the topic of flash, APFS includes TRIM support. TRIM is a command in the ATA protocol that allows a file system to indicate to an SSD (specifically to its FTL) that some space has been freed. SSDs require significant free space and perform better when there's more of it; they include more physical space than they advertise. For example, my 1TB SSD includes 1TB (2^40 = 1024^4) bytes of flash but only reports 931GB of available space, sneakily matching the storage industry's self-serving definition of 1TB (1000^4 = 1 trillion bytes). With more free space, FTLs can trade off space efficiency for performance and longevity. TRIM has become expected of file systems; it's unsurprising that APFS supports it. The problem with TRIM is that it's only useful when there's free space: it's something of a benchmark special. Once your disk is mostly full (as mine are in my laptop and phone basically at all times) TRIM doesn't do anything for you. I doubt that TRIM will bring any discernible benefit for APFS users beyond the placebo effect of feature parity.
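The 1TB/931GB discrepancy is pure unit arithmetic: drive vendors count in powers of ten, while operating systems historically report in powers of two.

```python
decimal_tb = 1000 ** 4   # what the marketing sticker means by "1TB"
binary_tb = 1024 ** 4    # 2**40: what 1TB of actual flash chips holds

# A 10^12-byte drive expressed in binary gigabytes:
reported_gb = decimal_tb / 1024 ** 3
print(f"{reported_gb:.0f}GB")    # prints "931GB", the familiar number

# The gap between physical flash and advertised capacity, which the
# FTL can quietly repurpose as overprovisioned free space:
overprovision_gb = (binary_tb - decimal_tb) / 1024 ** 3
print(f"{overprovision_gb:.1f}GB")
```

Roughly 93GB of hidden headroom on a nominal 1TB device, before TRIM frees up anything at all.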
APFS also focuses on latency: Apple's number one goal is to avoid the beach ball of doom. APFS addresses this with I/O QoS (quality of service) to prioritize accesses that are immediately visible to the user over background activity that doesn't have the same time-constraints. This is inarguably a benefit to users and a sophisticated file system capability.
Arguably the most important job of a file system is preserving data integrity. Here's my data, don't lose it, don't change it. If file systems could be trusted absolutely then the "only" reason for backup would be the idiot operators (i.e. you and me). There are a few mechanisms that file systems employ to keep data safe.
APFS makes no claims with regard to data redundancy. As Apple's Eric Tamura noted at WWDC, most Apple devices have a single storage device (i.e. one logical SSD) making RAID, for example, moot. Instead, redundancy comes from lower layers such as Apple RAID (apparently a thing), hardware RAID controllers, SANs, or even the "single" storage devices themselves.
As an aside, note that SSDs in most Apple products where APFS will run include multiple more-or-less independent NAND chips. High-end SSDs do implement data redundancy within the device, but it comes at the price of reduced capacity and performance. As noted above, the "flash-optimization" of APFS doesn't actually extend much below the surface of the standard block device interface, but the raw materials for innovation are there.
Also, APFS removes the most common way of a user achieving local data redundancy: copying files. A copied file in APFS actually creates a lightweight clone with no duplicated data. Corruption of the underlying device would mean that both "copies" were damaged, whereas with full copies localized data corruption would affect just one.
Computer systems can fail at any time—crashes, bugs, power outages, whatever—so file systems need to anticipate and recover from these scenarios. The old-old-old school method is to plod along and then have a special utility to check and repair the file system during boot (fsck, short for file system check). More modern systems labor to achieve an always-consistent format, or only narrow windows of inconsistency, obviating the need for the full, expensive fsck. ZFS, for example, builds up new state on disk and then transitions from the previous state to the new one with a single atomic operation.
Overwriting data creates the most obvious opening for inconsistency. If the file system needs to overwrite several regions there is a window where some regions represent the new state and some represent the former state. Copy-on-write (COW) is a method to avoid this by always allocating new regions and then releasing old ones for reuse, rather than modifying data in-place. APFS claims to implement a "novel copy-on-write metadata scheme"; APFS lead developer Dominic Giampaolo emphasized the novelty of this approach without delving into the details. In conversation later, he made it clear that APFS does not employ the ZFS mechanism of copying all metadata above changed user data which allows for a single, atomic update of the file system structure.
It's surprising to see that APFS includes fsck_apfs—even after asking Giampaolo I'm not sure why it would be necessary. For comparison, I don't believe there's been an instance where fsck for ZFS would have found a problem that the file system itself didn't already know how to detect. But Giampaolo was just as confused about why ZFS would forego fsck, so perhaps it's just a matter of opinion.
Notably absent from the APFS intro talk was any mention of checksums. A checksum is a digest or summary of data used to detect (and correct) data errors. The story here is surprisingly nuanced. APFS checksums its own metadata, but not user data. The justification for checksumming metadata is strong: there's not much of it relative to user data (so the checksums don't consume much storage) and losing metadata can cast a potentially huge shadow of data loss. If, for example, metadata for a top-level directory is corrupted, then potentially all data on the disk could be rendered inaccessible. ZFS duplicates metadata (and triple-duplicates top-level metadata) for exactly this reason.
Explicitly not checksumming user data is a little more interesting. The APFS engineers I talked to cited strong ECC protection within Apple storage devices. Both NAND flash SSDs and magnetic media HDDs use redundant data to detect and correct errors. The Apple engineers contend that Apple devices basically don't return bogus data. NAND uses extra data, e.g. 128 bytes per 4KB page, so that errors can be corrected and detected. (For reference, ZFS uses a fixed size 32 byte checksum for blocks ranging from 512 bytes to megabytes. That's small by comparison, but bear in mind that the SSD's ECC is required for the expected analog variances within the media.) The devices have a bit error rate that's low enough to expect no errors over the device's lifetime. In addition there are other sources of device errors where a file system's redundant check could be invaluable. SSDs have a multitude of components, and in volume consumer products they rarely contain end-to-end ECC protection, leaving the possibility of data being corrupted in transit. Further, their complex firmware can (does) contain bugs that can result in data loss.
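What a file-system-level checksum buys you is easy to demonstrate. A per-block digest stored with the metadata catches corruption that the device itself happily returned as a "successful" read. The sketch below is mine, not any real on-disk format; it uses SHA-256 in the spirit of ZFS's fixed-size block checksums.

```python
import hashlib

class ChecksummedStore:
    """Store each block alongside a digest; verify on every read."""
    def __init__(self):
        self.blocks = {}    # addr -> (digest, data)

    def write(self, addr, data):
        self.blocks[addr] = (hashlib.sha256(data).digest(), data)

    def read(self, addr):
        digest, data = self.blocks[addr]
        if hashlib.sha256(data).digest() != digest:
            # With redundancy (a mirror, RAID-Z) we could self-heal here;
            # without it, at least the corruption doesn't pass silently.
            raise IOError(f"checksum mismatch at block {addr}")
        return data

store = ChecksummedStore()
store.write(7, b"important user data")
assert store.read(7) == b"important user data"

# Simulate bit rot: the device flips a bit but still reports a clean read
digest, _ = store.blocks[7]
store.blocks[7] = (digest, b"importbnt user data")

try:
    store.read(7)
except IOError as e:
    print(e)    # detected rather than silently returned to the application
```

This is exactly the failure mode ECC inside the device cannot cover: data corrupted after it left the NAND (firmware bugs, transfer paths) validates fine at the device level and is only caught end-to-end.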
The Apple folks were quite interested in my experience with regard to bit rot (aging data silently losing integrity) and other device errors. I've seen many instances where devices raised no error but ZFS (correctly) detected corrupted data. Apple has some of the most stringent device qualification tests for its vendors; I trust that they really do procure the best components. Apple engineers I spoke with claimed that bit rot was not a problem for users of their devices, but if your software can't detect errors then you have no idea how your devices really perform in the field. ZFS has found data corruption on multi-million dollar storage arrays; I would be surprised if it didn't find errors coming from TLC (i.e. the cheapest) NAND chips in some of Apple's devices. Recall the (fairly) recent brouhaha regarding storage problems in the high-capacity iPhone 6. At least some of Apple's devices have been imperfect.
As someone who has data he cares about on a Mac, who has seen data lost from HFS, and who knows that even expensive, enterprise-grade equipment can lose data, I would gladly sacrifice 16 bytes per 4KB—less than 1% of my device's size.
As data ages you might occasionally want to check for bit rot. Likely fsck_apfs can accomplish this; though, as noted, there's no data redundancy and no checksums for user data, so a scrub would only help to find problems and likely wouldn't help to correct them. And if it makes it any easier for Apple to reverse course, let's say it's for the el cheap-o drive I bought from Fry's, not for the gold-plated device I got from Apple.
I'm not sure Apple absolutely had to replace HFS+, but likely they had passed an inflection point where continuing to maintain and evolve the 30+ year old software was more expensive than building something new. APFS is a product born of that assessment.
Based on what Apple has shown I'd surmise that its core design goals were:
• satisfying all consumers (laptop, phone, watch, etc.)
• encryption as a first-class citizen
• snapshots for modernized backup
Those are great goals that will benefit all Apple users, and based on the WWDC demos APFS seems to be on track (though the macOS Sierra beta isn't quite as far along).
In the process of implementing a new file system the APFS team has added some expected features. HFS was built when 400KB floppies ruled the Earth (recognized now only as the ubiquitous and anachronistic save icon). Any file system started in 2014 should of course consider huge devices, and SSDs—check and check. Copy-on-write (COW) snapshots are the norm; making the Duplicate command in the Finder faster wasn't much of a detour. The use case is unclear—it's a classic garbage can theory solution in search of a problem—but it doesn't hurt and it makes for a fun demo. The beach ball of doom earned its nickname; APFS was naturally built to avoid it.
There are some seemingly absent or ancillary design goals: performance, openness, and data integrity. Squeezing the most IOPS or throughput out of a device probably isn't critical on watchOS, and it's relevant only to a small percentage of macOS users. It will be interesting to see how APFS performs once it ships (measuring any earlier would only misinform the public and insult the APFS team).
The APFS development docs have a bullet on open source: "An open source implementation is not available at this time." I don't expect APFS to be open source at this time or any other, but prove me wrong, Apple. If APFS becomes world-class I'd love to see it in Linux and FreeBSD—maybe Microsoft would even jettison their ReFS experiment. My experience with OpenZFS has shown that open source accelerates that path to excellence. It's a shame that APFS lacks checksums for user data and doesn't provide for data redundancy. Data integrity should be job one for a file system, and I believe that's true for a watch or phone as much as it is for a server.
APFS will be an improvement at stability for Apple users of all kinds, on every device. There are some clear wins and some missed opportunities. Now that APFS has been shared with the world, the development team is probably listening. While Apple is clearly years past the decision to build from scratch rather than adopting existing modern technology, there's time to raise the priority of data integrity and openness. I'm impressed by Apple's goal of using APFS by default within 18 months. Regardless of how it goes, it will be an exciting transition.
Crispr appears to be the most important bio-tech breakthrough to come along in many years. The mainstream press had largely missed its importance and then—boom—this piece from Wired and Amy Maxmen appeared. The story tackles the tough work of describing in digestible terms how Crispr allows scientists to edit DNA. That’s the immediate payoff. But it also wraps all this science up in a gripping narrative of how the discovery came to be and what it might mean for people in the future. It’s one of those rare first swing, definitive type pieces and deserves everyone’s attention.
SPINY GRASS AND SCRAGGLY PINES creep amid the arts-and-crafts buildings of the Asilomar Conference Grounds, 100 acres of dune where California's Monterey Peninsula hammerheads into the Pacific. It's a rugged landscape, designed to inspire people to contemplate their evolving place on Earth. So it was natural that 140 scientists gathered here in 1975 for an unprecedented conference.
They were worried about what people called “recombinant DNA,” the manipulation of the source code of life. It had been just 22 years since James Watson, Francis Crick, and Rosalind Franklin described what DNA was—deoxyribonucleic acid, four different structures called bases stuck to a backbone of sugar and phosphate, in sequences thousands of bases long. DNA is what genes are made of, and genes are the basis of heredity.
Preeminent genetic researchers like David Baltimore, then at MIT, went to Asilomar to grapple with the implications of being able to decrypt and reorder genes. It was a God-like power—to plug genes from one living thing into another. Used wisely, it had the potential to save millions of lives. But the scientists also knew their creations might slip out of their control. They wanted to consider what ought to be off-limits.
By 1975, other fields of science—like physics—were subject to broad restrictions. Hardly anyone was allowed to work on atomic bombs, say. But biology was different. Biologists still let the winding road of research guide their steps. On occasion, regulatory bodies had acted retrospectively—after Nuremberg, Tuskegee, and the human radiation experiments, external enforcement entities had told biologists they weren't allowed to do that bad thing again. Asilomar, though, was about establishing prospective guidelines, a remarkably open and forward-thinking move.
At the end of the meeting, Baltimore and four other molecular biologists stayed up all night writing a consensus statement. They laid out ways to isolate potentially dangerous experiments and determined that cloning or otherwise messing with dangerous pathogens should be off-limits. A few attendees fretted about the idea of modifications of the human “germ line”—changes that would be passed on from one generation to the next—but most thought that was so far off as to be unrealistic. Engineering microbes was hard enough. The rules the Asilomar scientists hoped biology would follow didn't look much further ahead than ideas and proposals already on their desks.
Earlier this year, Baltimore joined 17 other researchers for another California conference, this one at the Carneros Inn in Napa Valley. “It was a feeling of déjà vu,” Baltimore says. There he was again, gathered with some of the smartest scientists on earth to talk about the implications of genome engineering.
The stakes, however, have changed. Everyone at the Napa meeting had access to a gene-editing technique called Crispr-Cas9. The first term is an acronym for “clustered regularly interspaced short palindromic repeats,” a description of the genetic basis of the method; Cas9 is the name of a protein that makes it work. Technical details aside, Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people. “These are monumental moments in the history of biomedical research,” Baltimore says. “They don't happen every day.”
Using the three-year-old technique, researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS. Agronomists have rendered wheat invulnerable to killer fungi like powdery mildew, hinting at engineered staple crops that can feed a population of 9 billion on an ever-warmer planet. Bioengineers have used Crispr to alter the DNA of yeast so that it consumes plant matter and excretes ethanol, promising an end to reliance on petrochemicals. Startups devoted to Crispr have launched. International pharmaceutical and agricultural companies have spun up Crispr R&D. Two of the most powerful universities in the US are engaged in a vicious war over the basic patent. Depending on what kind of person you are, Crispr makes you see a gleaming world of the future, a Nobel medallion, or dollar signs.
The technique is revolutionary, and like all revolutions, it's perilous. Crispr goes well beyond anything the Asilomar conference discussed. It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.
IN A WAY, humans were genetic engineers long before anyone knew what a gene was. They could give living things new traits—sweeter kernels of corn, flatter bulldog faces—through selective breeding. But it took time, and it didn't always pan out. By the 1930s refining nature got faster. Scientists bombarded seeds and insect eggs with x-rays, causing mutations to scatter through genomes like shrapnel. If one of hundreds of irradiated plants or insects grew up with the traits scientists desired, they bred it and tossed the rest. That's where red grapefruits came from, and most barley for modern beer.
Genome modification has become less of a crapshoot. In 2002, molecular biologists learned to delete or replace specific genes using enzymes called zinc-finger nucleases; the next-generation technique used enzymes named TALENs.
Yet the procedures were expensive and complicated. They only worked on organisms whose molecular innards had been thoroughly dissected—like mice or fruit flies. Genome engineers went on the hunt for something better.
As it happened, the people who found it weren't genome engineers at all. They were basic researchers, trying to unravel the origin of life by sequencing the genomes of ancient bacteria and microbes called Archaea (as in archaic), descendants of the first life on Earth. Deep amid the bases, the As, Ts, Gs, and Cs that made up those DNA sequences, microbiologists noticed recurring segments that were the same back to front and front to back—palindromes. The researchers didn't know what these segments did, but they knew they were weird. In a branding exercise only scientists could love, they named these clusters of repeating palindromes Crispr.
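The palindrome-spotting those microbiologists did can be sketched in a few lines of Python. Everything here is invented for illustration (the toy sequence, the segment length, the function names), and real Crispr repeats are only roughly palindromic, but the basic idea of scanning a sequence for recurring mirror-image segments is the same:

```python
def is_palindrome(seq: str) -> bool:
    """True if the segment reads the same forward and backward."""
    return seq == seq[::-1]

def find_palindromic_repeats(dna: str, length: int = 8):
    """Return palindromic segments of the given length that occur
    more than once in the sequence, with their positions -- the kind
    of recurring pattern the microbiologists noticed."""
    seen = {}
    for i in range(len(dna) - length + 1):
        segment = dna[i:i + length]
        if is_palindrome(segment):
            seen.setdefault(segment, []).append(i)
    return {seg: pos for seg, pos in seen.items() if len(pos) > 1}

# A toy sequence with the palindrome "GTTAATTG" planted twice,
# separated by "spacer" DNA (in real Crispr loci, the spacers
# turned out to be fingerprints of past viral infections).
toy = "GTTAATTGACGTACGTGTTAATTGCCATG"
print(find_palindromic_repeats(toy))  # → {'GTTAATTG': [0, 16]}
```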
Then, in 2005, a microbiologist named Rodolphe Barrangou, working at a Danish food company called Danisco, spotted some of those same palindromic repeats in Streptococcus thermophilus, the bacteria that the company uses to make yogurt and cheese. Barrangou and his colleagues discovered that the unidentified stretches of DNA between Crispr's palindromes matched sequences from viruses that had infected their S. thermophilus colonies. Like most living things, bacteria get attacked by viruses—in this case they're called bacteriophages, or phages for short. Barrangou's team went on to show that the segments served an important role in the bacteria's defense against the phages, a sort of immunological memory. If a phage infected a microbe whose Crispr carried its fingerprint, the bacteria could recognize the phage and fight back. Barrangou and his colleagues realized they could save their company some money by selecting S. thermophilus species with Crispr sequences that resisted common dairy viruses.
As more researchers sequenced more bacteria, they found Crisprs again and again—half of all bacteria had them. Most Archaea did too. And even stranger, some of Crispr's sequences didn't encode the eventual manufacture of a protein, as is typical of a gene, but instead led to RNA—single-stranded genetic material. (DNA, of course, is double-stranded.)
That pointed to a new hypothesis. Most present-day animals and plants defend themselves against viruses with structures made out of RNA. So a few researchers started to wonder if Crispr was a primordial immune system. Among the people working on that idea was Jill Banfield, a geomicrobiologist at UC Berkeley, who had found Crispr sequences in microbes she collected from acidic, 110-degree water from the defunct Iron Mountain Mine in Shasta County, California. But to figure out if she was right, she needed help.
Luckily, one of the country's best-known RNA experts, a biochemist named Jennifer Doudna, worked on the other side of campus in an office with a view of the Bay and San Francisco's skyline. It certainly wasn't what Doudna had imagined for herself as a girl growing up on the Big Island of Hawaii. She simply liked math and chemistry—an affinity that took her to Harvard and then to a postdoc at the University of Colorado. That's where she made her initial important discoveries, revealing the three-dimensional structure of complex RNA molecules that could, like enzymes, catalyze chemical reactions.
The mine bacteria piqued Doudna's curiosity, but when Doudna pried Crispr apart, she didn't see anything to suggest the bacterial immune system was related to the one plants and animals use. Still, she thought the system might be adapted for diagnostic tests.
Banfield wasn't the only person to ask Doudna for help with a Crispr project. In 2011, Doudna was at an American Society for Microbiology meeting in San Juan, Puerto Rico, when an intense, dark-haired French scientist asked her if she wouldn't mind stepping outside the conference hall for a chat. This was Emmanuelle Charpentier, a microbiologist at Umeå University in Sweden.
As they wandered through the alleyways of old San Juan, Charpentier explained that one of Crispr's associated proteins, named Csn1, appeared to be extraordinary. It seemed to search for specific DNA sequences in viruses and cut them apart like a microscopic multitool. Charpentier asked Doudna to help her figure out how it worked. “Somehow the way she said it, I literally—I can almost feel it now—I had this chill down my back,” Doudna says. “When she said ‘the mysterious Csn1’ I just had this feeling, there is going to be something good here.”
Back in Sweden, Charpentier kept a colony of Streptococcus pyogenes in a biohazard chamber. Few people want S. pyogenes anywhere near them. It can cause strep throat and necrotizing fasciitis—flesh-eating disease. But it was the bug Charpentier worked with, and it was in S. pyogenes that she had found that mysterious yet mighty protein, now renamed Cas9. Charpentier swabbed her colony, purified its DNA, and FedExed a sample to Doudna.
Working together, Charpentier’s and Doudna’s teams found that Crispr made two short strands of RNA and that Cas9 latched onto them. The sequence of the RNA strands corresponded to stretches of viral DNA and could home in on those segments like a genetic GPS. And when the Crispr-Cas9 complex arrives at its destination, Cas9 does something almost magical: It changes shape, grasping the DNA and slicing it with a precise molecular scalpel.
Jennifer Doudna did early work on Crispr. (Photo: Bryan Derballa)
Here’s what’s important: Once they’d taken that mechanism apart, Doudna’s postdoc, Martin Jinek, combined the two strands of RNA into one fragment—“guide RNA”—that Jinek could program. He could make guide RNA with whatever genetic letters he wanted; not just from viruses but from, as far as they could tell, anything. In test tubes, the combination of Jinek’s guide RNA and the Cas9 protein proved to be a programmable machine for DNA cutting. Compared to TALENs and zinc-finger nucleases, this was like trading in rusty scissors for a computer-controlled laser cutter. “I remember running into a few of my colleagues at Berkeley and saying we have this fantastic result, and I think it’s going to be really exciting for genome engineering. But I don’t think they quite got it,” Doudna says. “They kind of humored me, saying, ‘Oh, yeah, that’s nice.’”
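A toy model makes the "programmable machine for DNA cutting" concrete. All the sequences and names below are invented for illustration; the only borrowed facts are that real Cas9 requires an NGG "PAM" motif next to its target and cuts roughly three bases upstream of it. This is a sketch of the idea, not of any lab's actual pipeline:

```python
def crispr_cut(dna: str, guide: str) -> tuple:
    """Toy model of Crispr-Cas9 as a programmable cutter: find the
    site matching the guide sequence, check that it is followed by
    an NGG 'PAM' motif (which real Cas9 requires), and cut the DNA
    three bases upstream of the PAM -- roughly where Cas9 cleaves.
    Raises ValueError if no valid target exists."""
    for i in range(len(dna) - len(guide) - 2):
        target = dna[i:i + len(guide)]
        pam = dna[i + len(guide) + 1 : i + len(guide) + 3]
        if target == guide and pam == "GG":
            cut = i + len(guide) - 3   # blunt cut 3 bp upstream of PAM
            return dna[:cut], dna[cut:]
    raise ValueError("no target matching guide + NGG PAM")

dna = "AAATTTGACGCATGCATGGCCTTT"
guide = "GACGCATGCA"                   # the programmable part
left, right = crispr_cut(dna, guide)
print(left, "/", right)               # → AAATTTGACGCAT / GCATGGCCTTT
```

Swap in a different guide and the same machinery cuts a different site; that interchangeability is what made Jinek's result feel like a laser cutter next to rusty scissors.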
On June 28, 2012, Doudna’s team published its results in Science. In the paper and in an earlier corresponding patent application, they suggest their technology could be a tool for genome engineering. It was elegant and cheap. A grad student could do it.
The finding got noticed. In the 10 years preceding 2012, 200 papers mentioned Crispr. By 2014 that number had more than tripled. Doudna and Charpentier were each recently awarded the $3 million 2015 Breakthrough Prize. Time magazine listed the duo among the 100 most influential people in the world. Nobody was just humoring Doudna anymore.
MOST WEDNESDAY AFTERNOONS, Feng Zhang, a molecular biologist at the Broad Institute of MIT and Harvard, scans the contents of Science as soon as they are posted online. In 2012, he was working with Crispr-Cas9 too. So when he saw Doudna and Charpentier's paper, did he think he'd been scooped? Not at all. “I didn't feel anything,” Zhang says. “Our goal was to do genome editing, and this paper didn't do it.” Doudna's team had cut DNA floating in a test tube, but to Zhang, if you weren't working with human cells, you were just screwing around.
That kind of seriousness is typical for Zhang. At 11, he moved from China to Des Moines, Iowa, with his parents, who are engineers—one computer, one electrical. When he was 16, he got an internship at the gene therapy research institute at Iowa Methodist hospital. By the time he graduated high school he'd won multiple science awards, including third place in the Intel Science Talent Search.
When Doudna talks about her career, she dwells on her mentors; Zhang lists his personal accomplishments, starting with those high school prizes. Doudna seems intuitive and has a hands-off management style. Zhang … pushes. We scheduled a video chat at 9:15 pm, and he warned me that we'd be talking data for a couple of hours. “Power-nap first,” he said.
Zhang got his job at the Broad in 2011, when he was 29. Soon after starting there, he heard a speaker at a scientific advisory board meeting mention Crispr. “I was bored,” Zhang says, “so as the researcher spoke, I just Googled it.” Then he went to Miami for an epigenetics conference, but he hardly left his hotel room. Instead Zhang spent his time reading papers on Crispr and filling his notebook with sketches on ways to get Crispr and Cas9 into the human genome. “That was an extremely exciting weekend,” he says, smiling.
Just before Doudna's team published its discovery in Science, Zhang applied for a federal grant to study Crispr-Cas9 as a tool for genome editing. Doudna's publication shifted him into hyperspeed. He knew it would prompt others to test Crispr on genomes. And Zhang wanted to be first.
Even Doudna, for all of her equanimity, had rushed to report her finding, though she hadn't shown the system working in human cells. “Frankly, when you have a result that is exciting,” she says, “one does not wait to publish it.”
In January 2013, Zhang's team published a paper in Science showing how Crispr-Cas9 edits genes in human and mouse cells. In the same issue, Harvard geneticist George Church edited human cells with Crispr too. Doudna's team reported success in human cells that month as well, though Zhang is quick to assert that his approach cuts and repairs DNA better.
That detail matters because Zhang had asked the Broad Institute and MIT, where he holds a joint appointment, to file for a patent on his behalf. Doudna had filed her patent application—which was public information—seven months earlier. But the attorney filing for Zhang checked a box on the application marked “accelerate” and paid a fee, usually somewhere between $2,000 and $4,000. A series of emails followed between agents at the US Patent and Trademark Office and the Broad's patent attorneys, who argued that their claim was distinct.
A little more than a year after those human-cell papers came out, Doudna was on her way to work when she got an email telling her that Zhang, the Broad Institute, and MIT had indeed been awarded the patent on Crispr-Cas9 as a method to edit genomes. “I was quite surprised,” she says, “because we had filed our paperwork several months before he had.”
The Broad win started a firefight. The University of California amended Doudna's original claim to overlap Zhang's and sent the patent office a 114-page application for an interference proceeding—a hearing to determine who owns Crispr—this past April. In Europe, several parties are contesting Zhang's patent on the grounds that it lacks novelty. Zhang points to his grant application as proof that he independently came across the idea. He says he could have done what Doudna's team did in 2012, but he wanted to prove that Crispr worked within human cells. The USPTO may make its decision as soon as the end of the year.
The stakes here are high. Any company that wants to work with anything other than microbes will have to license Zhang's patent; royalties could be worth billions of dollars, and the resulting products could be worth billions more. Just by way of example: In 1983 Columbia University scientists patented a method for introducing foreign DNA into cells, called cotransformation. By the time the patents expired in 2000, they had brought in $790 million in revenue.
It's a testament to Crispr's value that despite the uncertainty over ownership, companies based on the technique keep launching. In 2011 Doudna and a student founded a company, Caribou, based on earlier Crispr patents; the University of California offered Caribou an exclusive license on the patent Doudna expected to get. Caribou uses Crispr to create industrial and research materials, potentially enzymes in laundry detergent and laboratory reagents. To focus on disease—where the long-term financial gain of Crispr-Cas9 will undoubtedly lie—Caribou spun off another biotech company called Intellia Therapeutics and sublicensed the Crispr-Cas9 rights. Pharma giant Novartis has invested in both startups. In Switzerland, Charpentier cofounded Crispr Therapeutics. And in Cambridge, Massachusetts, Zhang, George Church, and several others founded Editas Medicine, based on licenses on the patent Zhang eventually received.
Thus far the four companies have raised at least $158 million in venture capital.
ANY GENE TYPICALLY has just a 50–50 chance of getting passed on. Either the offspring gets a copy from Mom or a copy from Dad. But in 1957 biologists found exceptions to that rule, genes that literally manipulated cell division and forced themselves into a larger number of offspring than chance alone would have allowed.
A decade ago, an evolutionary geneticist named Austin Burt proposed a sneaky way to use these “selfish genes.” He suggested tethering one to a separate gene—one that you wanted to propagate through an entire population. If it worked, you'd be able to drive the gene into every individual in a given area. Your gene of interest graduates from public transit to a limousine in a motorcade, speeding through a population in flagrant disregard of heredity's traffic laws. Burt suggested using this “gene drive” to alter mosquitoes that spread malaria, which kills around a million people every year. It's a good idea. In fact, other researchers are already using other methods to modify mosquitoes to resist the Plasmodium parasite that causes malaria and to be less fertile, reducing their numbers in the wild. But engineered mosquitoes are expensive. If researchers don't keep topping up the mutants, the normals soon recapture control of the ecosystem.
Push those modifications through with a gene drive and the normal mosquitoes wouldn't stand a chance. The problem is, inserting the gene drive into the mosquitoes was impossible. Until Crispr-Cas9 came along.
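Burt's arithmetic is easy to sketch. Assuming random mating, no fitness cost, and a drive that converts every heterozygote (all simplifications; real drives leak and meet resistance), the drive allele's frequency follows p → 1 − (1 − p)² each generation, while a normal neutral allele just sits still:

```python
def spread(p0: float, generations: int, drive: bool):
    """Deterministic sketch of allele frequency under random mating.
    Without a drive, a neutral allele's frequency stays put; with a
    perfect gene drive, every heterozygote converts to a drive
    homozygote, so the frequency follows p -> 1 - (1 - p)**2."""
    p = p0
    history = [p]
    for _ in range(generations):
        p = 1 - (1 - p) ** 2 if drive else p
        history.append(p)
    return history

# Release carriers of the modified gene at 1% of the population:
normal = spread(0.01, 10, drive=False)   # stuck at 1% forever
driven = spread(0.01, 10, drive=True)    # near 100% within 10 generations
print(normal[-1], driven[-1])
```

That doubling of the deficit (1 − p) each generation is the limousine-in-a-motorcade effect: a gene that would otherwise linger at 1% sweeps to near fixation in about ten generations.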
Today, behind a set of four locked and sealed doors in a lab at the Harvard School of Public Health, a special set of mosquito larvae of the African species Anopheles gambiae wriggle near the surface of shallow tubs of water. These aren't normal Anopheles, though. The lab is working on using Crispr to insert malaria-resistant gene drives into their genomes. It hasn't worked yet, but if it does … well, consider this from the mosquitoes' point of view. This project isn't about reengineering one of them. It's about reengineering them all.
Kevin Esvelt, the evolutionary engineer who initiated the project, knows how serious this work is. The basic process could wipe out any species. Scientists will have to study the mosquitoes for years to make sure that the gene drives can't be passed on to other species of mosquitoes. And they want to know what happens to bats and other insect-eating predators if the drives make mosquitoes extinct. “I am responsible for opening a can of worms when it comes to gene drives,” Esvelt says, “and that is why I try to ensure that scientists are taking precautions and showing themselves to be worthy of the public's trust—maybe we're not, but I want to do my damnedest to try.”
Esvelt talked all this over with his adviser—Church, who also worked with Zhang. Together they decided to publish their gene-drive idea before it was actually successful. They wanted to lay out their precautionary measures, way beyond five nested doors. Gene drive research, they wrote, should take place in locations where the species of study isn't native, making it less likely that escapees would take root. And they also proposed a way to turn the gene drive off when an engineered individual mated with a wild counterpart—a genetic sunset clause. Esvelt filed for a patent on Crispr gene drives, partly, he says, to block companies that might not take the same precautions.
Within a year, and without seeing Esvelt's papers, biologists at UC San Diego had used Crispr to insert gene drives into fruit flies—they called them “mutagenic chain reactions.” They had done their research in a chamber behind five doors, but the other precautions weren't there. Church said the San Diego researchers had gone “a step too far”—big talk from a scientist who says he plans to use Crispr to bring back an extinct woolly mammoth by deriving genes from frozen corpses and injecting them into elephant embryos. (Church says tinkering with one woolly mammoth is way less scary than messing with whole populations of rapidly reproducing insects. “I'm afraid of everything,” he says. “I encourage people to be as creative in thinking about the unintended consequences of their work as the intended.”)
Ethan Bier, who worked on the San Diego fly study, agrees that gene drives come with risks. But he points out that Esvelt's mosquitoes don't have the genetic barrier Esvelt himself advocates. (To be fair, that would defeat the purpose of a gene drive.) And the ecological barrier, he says, is nonsense. “In Boston you have hot and humid summers, so sure, tropical mosquitoes may not be native, but they can certainly survive,” Bier says. “If a pregnant female got out, she and her progeny could reproduce in a puddle, fly to ships in the Boston Harbor, and get on a boat to Brazil.”
These problems don't end with mosquitoes. One of Crispr's strengths is that it works on every living thing. That kind of power makes Doudna feel like she opened Pandora's box. Use Crispr to treat, say, Huntington's disease—a debilitating neurological disorder—in the womb, when an embryo is just a ball of cells? Perhaps. But the same method could also possibly alter less medically relevant genes, like the ones that make skin wrinkle. “We haven't had the time, as a community, to discuss the ethics and safety,” Doudna says, “and, frankly, whether there is any real clinical benefit of this versus other ways of dealing with genetic disease.”
That's why she convened the meeting in Napa. All the same problems of recombinant DNA that the Asilomar attendees tried to grapple with are still there—more pressing now than ever. And if the scientists don't figure out how to handle them, some other regulatory body might. Few researchers, Baltimore included, want to see Congress making laws about science. “Legislation is unforgiving,” he says. “Once you pass it, it is very hard to undo.”
In other words, if biologists don't start thinking about ethics, the taxpayers who fund their research might do the thinking for them.
All of that only matters if every scientist is on board. A month after the Napa conference, researchers at Sun Yat-sen University in Guangzhou, China, announced they had used Crispr to edit human embryos. Specifically they were looking to correct mutations in the gene that causes beta thalassemia, a disorder that interferes with a person's ability to make healthy red blood cells.
The work wasn't successful—Crispr, it turns out, didn't target genes as well in embryos as it does in isolated cells. The Chinese researchers tried to skirt the ethical implications of their work by using nonviable embryos, which is to say they could never have been brought to term. But the work attracted attention. A month later, the US National Academy of Sciences announced that it would create a set of recommendations for scientists, policymakers, and regulatory agencies on when, if ever, embryonic engineering might be permissible. Another National Academy report will focus on gene drives. Though those recommendations don't carry the weight of law, federal funding in part determines what science gets done, and agencies that fund research around the world often abide by the academy's guidelines.
THE TRUTH IS, most of what scientists want to do with Crispr is not controversial. For example, researchers once had no way to figure out why spiders have the same gene that determines the pattern of veins in the wings of flies. You could sequence the spider and see that the “wing gene” was in its genome, but all you’d know was that it certainly wasn’t designing wings. Now, with less than $100, an ordinary arachnologist can snip the wing gene out of a spider embryo and see what happens when that spider matures. If it’s obvious—maybe its claws fail to form—you’ve learned that the wing gene must have served a different purpose before insects branched off, evolutionarily, from the ancestor they shared with spiders. Pick your creature, pick your gene, and you can bet someone somewhere is giving it a go.
Academic and pharmaceutical company labs have begun to develop Crispr-based research tools, such as cancerous mice—perfect for testing new chemotherapies. A team at MIT, working with Zhang, used Crispr-Cas9 to create, in just weeks, mice that inevitably get liver cancer. That kind of thing used to take more than a year. Other groups are working on ways to test drugs on cells with single-gene variations to understand why the drugs work in some cases and fail in others. Zhang’s lab used the technique to learn which genetic variations make people resistant to a melanoma drug called Vemurafenib. The genes he identified may provide research targets for drug developers.
The real money is in human therapeutics. For example, labs are working on the genetics of so-called elite controllers, people who can be HIV-positive but never develop AIDS. Using Crispr, researchers can knock out a gene called CCR5, which makes a protein that helps usher HIV into cells. You’d essentially make someone an elite controller. Or you could use Crispr to target HIV directly; that begins to look a lot like a cure.
Or—and this idea is decades away from execution—you could figure out which genes make humans susceptible to HIV overall. Make sure they don’t serve other, more vital purposes, and then “fix” them in an embryo. It’d grow into a person immune to the virus.
But straight-out editing of a human embryo sets off all sorts of alarms, both in terms of ethics and legality. It contravenes the policies of the US National Institutes of Health, and in spirit at least runs counter to the United Nations’ Universal Declaration on the Human Genome and Human Rights. (Of course, when the US government said it wouldn’t fund research on human embryonic stem cells, private entities raised millions of dollars to do it themselves.) Engineered humans are a ways off—but nobody thinks they’re science fiction anymore.
Even if scientists never try to design a baby, the worries those Asilomar attendees had four decades ago now seem even more prescient. The world has changed. “Genome editing started with just a few big labs putting in lots of effort, trying something 1,000 times for one or two successes,” says Hank Greely, a bioethicist at Stanford. “Now it’s something that someone with a BS and a couple thousand dollars’ worth of equipment can do. What was impractical is now almost everyday. That’s a big deal.”
In 1975 no one was asking whether a genetically modified vegetable should be welcome in the produce aisle. No one was able to test the genes of an unborn baby, or sequence them all. Today swarms of investors are racing to bring genetically engineered creations to market. The idea of Crispr slides almost frictionlessly into modern culture.
In an odd reversal, it’s the scientists who are showing more fear than the civilians. When I ask Church for his most nightmarish Crispr scenario, he mutters something about weapons and then stops short. He says he hopes to take the specifics of the idea, whatever it is, to his grave. But thousands of other scientists are working on Crispr. Not all of them will be as cautious. “You can’t stop science from progressing,” Jinek says. “Science is what it is.” He’s right. Science gives people power. And power is unpredictable.
When the 2011 earthquake and tsunami struck Tohoku, Japan, Chris Goldfinger was two hundred miles away, in the city of Kashiwa, at an international meeting on seismology. As the shaking started, everyone in the room began to laugh. Earthquakes are common in Japan—that one was the third of the week—and the participants were, after all, at a seismology conference. Then everyone in the room checked the time.
Seismologists know that how long an earthquake lasts is a decent proxy for its magnitude. The 1989 earthquake in Loma Prieta, California, which killed sixty-three people and caused six billion dollars’ worth of damage, lasted about fifteen seconds and had a magnitude of 6.9. A thirty-second earthquake generally has a magnitude in the mid-sevens. A minute-long quake is in the high sevens, a two-minute quake has entered the eights, and a three-minute quake is in the high eights. By four minutes, an earthquake has hit magnitude 9.0.
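That rule of thumb can be written down as a rough lookup. The anchor values below are read off the paragraph above, with "mid-sevens" and "high eights" pinned to arbitrary representative numbers, so treat this as a back-of-the-envelope toy, not seismology:

```python
# Anchor points from the rule of thumb:
# shaking duration (seconds) -> approximate magnitude.
ANCHORS = [(15, 6.9), (30, 7.5), (60, 7.8), (120, 8.0), (180, 8.8), (240, 9.0)]

def rough_magnitude(seconds: float) -> float:
    """Linear interpolation between the anchor points; a crude proxy,
    not a seismological model."""
    if seconds <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    for (d1, m1), (d2, m2) in zip(ANCHORS, ANCHORS[1:]):
        if seconds <= d2:
            return m1 + (m2 - m1) * (seconds - d1) / (d2 - d1)
    return ANCHORS[-1][1]

print(round(rough_magnitude(90), 2))   # → 7.9
```

The point of checking the time, rather than the shaking, is that magnitude is logarithmic: the difference between sixty seconds and four minutes is the difference between a bad day and a catastrophe.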
When Goldfinger looked at his watch, it was quarter to three. The conference was wrapping up for the day. He was thinking about sushi. The speaker at the lectern was wondering if he should carry on with his talk. The earthquake was not particularly strong. Then it ticked past the sixty-second mark, making it longer than the others that week. The shaking intensified. The seats in the conference room were small plastic desks with wheels. Goldfinger, who is tall and solidly built, thought, No way am I crouching under one of those for cover. At a minute and a half, everyone in the room got up and went outside.
Last night I played poker with the weekend crew. Won after a long time. But one more late night 💀💤
Today's plan is to read all day
Back-to-back partying and nights out are taking their toll on me. I need a whole week off, just sleeping and exercising. I'm getting none of it at the moment.
Here are some pics from last weekend. It was Nikeeta's birthday at Farzi Cafe in Kamala Mills.