During the day in the mid-2000s I took classes in imperial history. On Friday and Saturday nights I descended to the basement of the student center at the University of Auckland to take part in an intense, desperate, and sometimes violent feud with five friends over control of the planet of Arrakis through Avalon Hill’s legendary strategy board game, Dune.
The board game was released in 1979, a year after Edward Said’s Orientalism appeared. These sessions extended long into the night (the game can take ten hours to complete) and both tested and forged friendships as we schemed with, tricked, and betrayed each other. At the time, I didn’t consider any connection between my history classes (or even discussions about Said with the same friends) and these nocturnal contests. In hindsight, though, the source material for the game, Frank Herbert’s 1965 novel, Dune, built on nineteenth- and twentieth-century imperial fantasies of knowledge, control, and power.
On the surface, the novel Dune fulfills a popular imperialist fantasy by granting its main character mastery over native “others” whose superstition and history make them comprehensible and exploitable. However, it is also a book of schemes, assassination, betrayal, hidden motives, and unexpected consequences. Like the novel’s main antagonists, this fantasy ends stabbed and poisoned on the floor of a broken palace. In certain ways, Herbert’s embrace and subversion of orientalist tropes around knowledge even anticipated modern critiques of empire. [continue reading]
For generations, race studies scholars—historians and literary critics alike—believed that race and its pernicious spawn, racism, were exclusively modern phenomena. This is because race was originally defined in biological terms and was believed to be determined by skin color, physiognomy, and genetic inheritance. The more astute, however, came to realize that race could also be a matter of cultural classification, as Ann Stoler’s study of the colonial Dutch East Indies makes plain:
Race could never be a matter of physiology alone. Cultural competency in Dutch customs, a sense of ‘belonging’ in a Dutch cultural milieu…disaffiliation with things Javanese…domestic arrangements, parenting styles, and moral environment…were crucial to defining…who was to be considered European.*
Yet even after we recognized that people could be racialized through cultural and social criteria—that race could be a social construction—the European Middle Ages was still seen as outside the history of race (I speak only of the European Middle Ages because I’m a euromedievalist—it’s up to others to discuss race in Islamic, Jewish, Asian, African, and American premodernities). [continue reading]
Editor’s Note: In the weeks leading up to the new year, please help us remember 2018 at the Imperial & Global Forum by checking out the past year’s 10 most popular posts.
Catherine Baker, University of Hull
Six years ago, in 2012, the dramatised arrival of the ‘Windrush Generation’ provided many British viewers with one of the most moving moments in the opening ceremony of the London Olympic Games. The dozens of black Londoners who entered the stadium during the ceremony’s historical pageant, accompanied by a giant model of the Empire Windrush (which had docked at Tilbury in June 1948), stood for the hundreds of thousands of black Britons who had migrated from the Caribbean to Britain, then still their imperial metropole, between 1948 and 1962.
The moment when the ‘Windrush Generation’ joined the pageant’s chaotic whirl of characters drawn from modern British social and cultural history symbolised, for millions of its viewers (if not those people of colour with more reason to be suspicious of British promises), a Britain finally inclusive enough to have made the post-Windrush black presence as integral a part of its national story as Remembrance or Brunel. Today, however, members of this same symbolic generation have been threatened with deportation – and some have already been deported – because they have been unable to prove their immigration status despite living in Britain for more than fifty years. The Daily Mirror’s Brian Reade was far from alone in wondering where it had all gone wrong since 2012. [continue reading]
US Economic Imperialism within a British World System
Historians have been busy chipping away at the myth of the exceptional American Empire, usually with an eye towards the British Empire. Most comparative studies of the two empires, however, focus on the pre-1945 British Empire and the post-1945 American Empire.[i] Why this tendency to avoid contemporaneous studies of the two empires? Perhaps because such a study would yield more differences than it would similarities, particularly when examining the imperial trade policies of the two empires from the mid-nineteenth to mid-twentieth century.
For those imperial histories that have attempted such a side-by-side comparison, the so-called Open Door Empire of the United States is depicted as having copied the free-trade imperial policies of its estranged motherland by the turn of the century; these imitative policies reached new Anglo-Saxonist heights following US colonial acquisitions in the Caribbean and the Pacific from the Spanish Empire in 1898, followed closely by the fin-de-siècle establishment of the Anglo-American ‘Great Rapprochement’.[ii]
Gallagher and Robinson’s 1953 ‘imperialism of free trade’ thesis—which explored the informal British Empire that arose following Britain’s unilateral adoption (and at times coercive international implementation) of free-trade policies from the late 1840s to the early 1930s—has played a particularly crucial theoretical role in shaping the historiography of the American Empire. In The Tragedy of American Diplomacy (1959), William Appleman Williams provided the first iteration of the imitative open-door imperial thesis, wherein he explicitly used the ‘imperialism of free trade’ theory in order to uncover an American informal empire. ‘The Open Door Policy’, Williams asserted, ‘was America’s version of the liberal policy of informal empire or free-trade imperialism’.[iii] The influence of Williams’s provocative thesis led to the creation of the most influential school of US imperial history—the ‘Wisconsin School’—which would continue in its quest to unearth American open-door or free-trade imperialism for decades to come.[iv] As a result, the contrasting ways in which the American Empire grew in the shadow of the British Empire have largely remained hidden. [continue reading]
The gap between the Cold War’s history and its new historiography spanned only about a decade and a half. The Cold War concluded during the George H.W. Bush presidency, but for the field we now call “the US and the world,” the Cold War paradigm reached its terminus, if we have to be specific, in 2005. That year saw the publication of two books that together marked a milestone in how scholars would write about the Cold War. John Lewis Gaddis’ The Cold War: A New History told its story through engaging prose and a top-down approach that gave pride of place to Washington and Moscow as the centers of a bifurcated world. For its part, Odd Arne Westad’s The Global Cold War: Third World Interventions and the Making of Our Times offered a triangular model in which empires of liberty and of justice interacted with Third World revolutionaries who led campaigns for decolonization that shifted into high gear after World War II. Gaddis’ survey represented a culmination of the traditional two-camps schema, which tended to reflect self-understandings of the US government but which, after Westad’s concurrent synthesis, could no longer stand without qualification, without reference to the colonial dimension of the Cold War itself. In this sense, 2005 was a before-and-after historiographical event.
The classic Cold War concept, in which the governance and formal decolonization of Western Europe’s empires was one thing, and the rivalry between the superpowers something altogether else, has been eroded, though not by one book alone. Various social movements have rejected the tenets of the Cold War at different times, and as far back as 1972, historians Joyce and Gabriel Kolko argued that “The so-called Cold War…was far less the confrontation of the United States with Russia than America’s expansion into the entire world.” In 2000, Matthew Connelly called attention to the distortions accompanying attempts to fit postwar history to the constraints of the Cold War paradigm. The “Cold War lens,” as Connelly memorably called it, had obscured racial and religious realities. As more scholars began to push the weight of culture, decolonization, gender, public opinion, and more against the Cold War paradigm’s once stable conceptual walls, the foundations faltered. And since Westad’s 2005 landmark, a notable tendency has developed across the disciplines in which scholars – notably Mark Philip Bradley, Jodi Kim, Heonik Kwon, and the authors (including Westad) contributing to Joel Isaac and Duncan Bell’s volume on the Cold War idea – have further troubled the notion that what followed World War II is best understood by focusing on how the leaders of the US and USSR saw the world. [continue reading]
It’s been a year now since Christopher Nolan’s film Dunkirk was released to critical acclaim and public approval, but also to criticism. Much of the criticism arose because the film omitted any mention of the Commonwealth troops who were in the British Expeditionary Force (BEF) and at Dunkirk. It felt like a missed opportunity to correct an anomaly in the collective memory of Britain and the world: to remember the mule drivers of the Royal Indian Army Service Corps (RIASC) who were also on those beaches.
So here’s the missing piece of the story, derived from my research into Dunkirk’s Indian soldiers.
On May 29, 1940, in the middle of the evacuation of Dunkirk, with thousands of British soldiers lined up on the beaches east of the French town, with a giant pall of smoke rising from the burning oil refinery, with regular sorties by Luftwaffe planes scattering the queues, and with ships large and small taking men off the beaches, Major Mohammed Akbar Khan of the RIASC marched four miles along the beach at the head of 312 Muslim Indians, en route from Punjab to Pirbright.
Five hundred years in the future, humanity has left Earth and expanded into a new solar system. New planets have been terraformed and colonised. Life at the centre of this system is luxurious, sophisticated, civilised. On the outer fringes, existence is more precarious, and eking out a living a more dangerous game. This is the world of Joss Whedon’s regrettably short-lived television series Firefly (2002) and its feature film follow-up Serenity (2005). Both follow the rag-tag crew of the spaceship Serenity, led by captain Mal Reynolds (Nathan Fillion), as they struggle to make ends meet by means legal and otherwise on the rough outer edges of this fictional universe, known in the show’s jargon as “the Verse.”
The Verse is cast in the mode of not one but two genres—the space opera and the western—that dramatise life on the frontier, and much of its humour and interest lies in the productive tension between their respective visions of that setting. According to Whedon, Firefly’s genesis lay in his reading of The Killer Angels (1974), Michael Shaara’s historical novel about the Battle of Gettysburg. He afterwards became “obsessed with the idea of life on the frontier, and that of course [made him] think of the Millennium Falcon.” In imagining the space opera as an adapted western that shifted nineteenth-century imperial tropes into an extraterrestrial future, Whedon was merely making explicit long-standing undercurrents within the genre. (Gene Roddenberry’s working title for Star Trek—a constant intertextual counterpoint in Firefly—had been Wagon Train to the Stars. Its trademark incantation of “space, the final frontier” was not incidental.) [continue reading]