It's been a while since I updated this blog, so I'm going to cross-post a literary thought from a private Facebook discussion:

Fahrenheit 451 has strong anti-feminine sentiments: the main characters are men; the only thoughtful woman is a woman-in-the-freezer; women are always used as the examples of societal airheaded naivete. Criticising this, however, seems to play into the book's own narratives of censorship:

"Now let's take up the minorities in our civilization, shall we? Bigger the population, the more minorities. Don't step on the toes of the dog-lovers, the cat-lovers, doctors, lawyers, merchants, chiefs, Mormons, Baptists, Unitarians, second-generation Chinese, Swedes, Italians, Germans, Texans, Brooklynites, Irishmen, people from Oregon or Mexico. The people in this book, this play, this TV serial are not meant to represent any actual painters, cartographers, mechanics anywhere. The bigger your market, Montag, the less you handle controversy, remember that! All the minor minor minorities with their navels to be kept clean. Authors, full of evil thoughts, lock up your typewriters. They did. Magazines became a nice blend of vanilla tapioca. Books, so the damned snobbish critics said, were dishwater. No wonder books stopped selling, the critics said. But the public, knowing what it wanted, spinning happily, let the comic-books survive."

In the postscript to my edition, Bradbury explicitly addresses this complaint (among others) noting that "there's more than one way to burn a book". He further recounts that, ironically, many students in the US have been given editions which are partially censored (mostly swearwords like 'damn' and 'hell').

Here's an interesting idea: the book is improved because it simultaneously cries out for criticism of its gender stereotypes (or even a re-write) while explicitly warning against the dangers of special-interest censorship. A progressive reading this book in the 21st century needs to wage, in their own mind, the societal-censorship battle that Bradbury describes.

Note: It was written in the 50s, which is an explanation if not an excuse.

Note 2: Women aren't a minority, but they are an interest group in the same style as the others and Bradbury explicitly addresses 'feministas' as a minority in the postscript of my edition.

## Thursday, 28 August 2014

### Fahrenheit 451 (a non-technical post)

Posted by Daniel Rodgers-Pryor at 9:21 pm. Labels: Bradbury, Fahrenheit 451, feminism, free speech, literature.

## Sunday, 9 February 2014

### Pascal's Triangle

While smoothing some evenly-spaced time series data a while ago, I found the need to generate rows from Pascal's triangle; for my own entertainment, I tried to come up with some minimal (but not totally unreadable) code:


```python
from functools import reduce  # built in under Python 2; in functools under Python 3

def pascalRow(n):
    """Return the nth row of Pascal's triangle."""
    return [1] if n <= 0 else reduce(
        lambda row, k: row[:-1] + [row[-1] + k, k],
        pascalRow(n - 1),
        [0],
    )
```

This is short and quite fast, but severely lacking in readability; a trait only made slightly excusable by its modularity and clarity of function. The question you may be asking, though, is 'why Pascal's triangle in the first place?'
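As a quick sanity check (the function is restated here, with Python 3's `functools` import, so the snippet runs on its own):

```python
from functools import reduce  # needed under Python 3

def pascalRow(n):
    """Return the nth row of Pascal's triangle (n = 0 gives [1])."""
    return [1] if n <= 0 else reduce(
        lambda row, k: row[:-1] + [row[-1] + k, k],
        pascalRow(n - 1),
        [0],
    )

print(pascalRow(4))  # [1, 4, 6, 4, 1]
```

Each call builds row $n$ by pairwise-summing adjacent entries of row $n-1$, which is exactly Pascal's rule $\binom nk = \binom{n-1}{k-1} + \binom{n-1}{k}$.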

Gaussian blurring/smoothing is just another instance of the general rule that 'Gaussians come up everywhere'. When dealing with discrete data, it's important to remember that the binomial distribution is the discrete equivalent of the Gaussian. The $n^{th}$ row of Pascal's triangle is a list of the binomial coefficients $\binom nk$. These are the weightings (up to a normalisation factor of $\frac{1}{2^n}$) describing how much each raw data point contributes to its neighbours:

$$x_i^{smoothed} = \frac{1}{2^n} \sum\limits_{k = 0}^{n} \binom nk x_{i-\frac{n}{2} + k}$$

Each smoothed point is thus heavily influenced by its near neighbours, but not by its more distant ones. Note that $n$ determines the radius of the smoothing filter (in units of your data's sampling period).
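A minimal sketch of that sum in Python (the helper name `smooth` is my own; it reuses `pascalRow` from above, restated so the snippet is self-contained, and simply leaves the $n/2$ edge points unsmoothed to keep the example short):

```python
from functools import reduce

def pascalRow(n):
    """Return the nth row of Pascal's triangle."""
    return [1] if n <= 0 else reduce(
        lambda row, k: row[:-1] + [row[-1] + k, k],
        pascalRow(n - 1),
        [0],
    )

def smooth(data, n=2):
    """Binomially-weighted moving average with a window of n + 1 points (n even)."""
    weights = [c / 2**n for c in pascalRow(n)]  # normalised binomial coefficients
    half = n // 2
    out = list(data)  # edge points are left untouched
    for i in range(half, len(data) - half):
        out[i] = sum(w * data[i - half + k] for k, w in enumerate(weights))
    return out
```

For example, with $n = 2$ the weights are $[\frac14, \frac12, \frac14]$, so a lone spike of 4 spreads out into its neighbours: `smooth([0, 0, 4, 0, 0], 2)` gives `[0, 1.0, 2.0, 1.0, 0]`.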

This kind of clunky, discrete sum is just a special case of a convolution. Convolutions generalise the notion of smoothing to filtering, enabling a huge range of operations (notably Fourier transforms) to be performed with these same kinds of discrete sums. Convolutions also generalise to higher dimensions (and continuous spaces). In 2D, the weights or 'filter kernel' become a matrix; filtering then means sliding the kernel matrix over each pixel of the data matrix. This gives a complexity of $O(n^2m^2)$ for an $m \times m$ kernel applied to an $n \times n$ image. However, Gaussians (binomials) have a special property that makes these sums more efficient: multidimensional Gaussians are separable. A 2D Gaussian is just the product of two orthogonal 1D Gaussians: $G(x, y) = G(x)G(y)$. [Note that the separated components of a 2D Gaussian might not align with the x and y axes of your data matrix. This is only a problem if the Gaussian is anisotropic, i.e. if it has different widths in different directions.]

This means that convolutions with Gaussian kernels can be done one dimension at a time: convolving each row, and then each column, with a length-$m$ binomial/Gaussian kernel, reducing the time complexity to $O(n^2m)$. This reduction extends to higher dimensions too: you can always break a Gaussian down into 1D components and convolve with them sequentially.
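A sketch of the separability trick, assuming NumPy (in practice a library routine such as `scipy.ndimage.gaussian_filter` does this for you; `binomial_kernel` and `separable_blur` are hypothetical helper names): blurring rows and then columns with the same 1D binomial kernel is equivalent to a single pass with the full 2D kernel.

```python
import numpy as np

def binomial_kernel(n):
    """Normalised 1D binomial (discrete Gaussian) kernel of length n + 1."""
    row = [1]
    for _ in range(n):  # build row n of Pascal's triangle by pairwise sums
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return np.array(row) / 2**n

def separable_blur(image, n=2):
    """Blur rows then columns with the same 1D kernel: O(n^2 m), not O(n^2 m^2)."""
    k = binomial_kernel(n)
    # 'same'-mode 1D convolution along each axis in turn
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```

Blurring a single bright pixel this way produces the outer product of the two 1D kernels, which is exactly the full 2D binomial kernel centred on that pixel.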

Posted by Daniel Rodgers-Pryor at 2:48 am. Labels: binomial, combinatorics, convolution, filtering, gaussian, kernel, math, maths, python, smoothing.



## About Me

- Daniel Rodgers-Pryor
- Melbourne, Victoria, Australia
- I'm hard at work on an MSc in Physics at the University of Melbourne; my research topic is 'building nano-scale neural networks'. In my (limited) spare time I tinker with 3D printing, electronics and programming.