# Automatic Detection of Toxic Questions

```
Non-toxic questions:
1. What are the best rental property calculators?
2. Apart from wealth, fame, and their tragic ends, what did Anthony Bourdain and Kate Spade have in common that might provide some insight into their choice to take their own lives?
3. How do you find your true purpose or mission in life?
4. What is the relation between space and time if they are connected? Are they converting continuously in each other?
5. Is there an underlying message that can be read into the many multilateral agreement exits of the Trump administration during its first year and a half?
____________________________________________________________________
Toxic questions:
1. Lol no disrespect but I think you are ducking smart?
2. Are Denmark and England destroyed by Muslim immigrants?
3. How am I supposed to get a girlfriend if every woman thinks every man is a rapist?
4. How many black friends does a white person need to make what they say 'not racist' on this basis?
5. Are Russian women more beautiful than Ukrainian women?
```

## Preprocessing

```
[math] 2 + 2 = 4 [/math] -> math
```
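The replacement above, collapsing LaTeX spans into the single token `math`, can be sketched with a regular expression (the helper name is hypothetical, not from the original code):

```python
import re

def collapse_math(text):
    # Replace each [math] ... [/math] span with the single token 'math'
    return re.sub(r"\[math\].*?\[/math\]", "math", text, flags=re.DOTALL)
```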
```
"Wait... What is the relation between space and time if they are connected? Are they converting continuously in each other?"
->
"Wait . . . What is the relation between space and time if they are connected ? Are they converting continuously in each other ?"
```
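The tokenization step above, isolating punctuation so that a token like `connected?` splits into `connected` and `?`, can be sketched as follows (the helper name is hypothetical):

```python
import re

PUNCT = ".,?!"

def separate_punctuation(text):
    # Surround each punctuation mark with spaces, then collapse
    # the resulting runs of whitespace into single spaces.
    for ch in PUNCT:
        text = text.replace(ch, f" {ch} ")
    return re.sub(r"\s+", " ", text).strip()
```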
```
from tqdm import tqdm

for word, i in tqdm(word_index.items()):
    embedding_vector = embeddings_index.get(word)
    # 'ABcd' -> 'abcd'
    if embedding_vector is None:
        embedding_vector = embeddings_index.get(word.lower())
    # 'denis' -> 'Denis'
    if embedding_vector is None:
        embedding_vector = embeddings_index.get(word.capitalize())
    # 'usa' -> 'USA'
    if embedding_vector is None:
        embedding_vector = embeddings_index.get(word.upper())
    # dealing with numbers in the Google News embedding: '123' -> '###'
    if word.isdigit() and (embedding_vector is None):
        temp_word = len(word) * '#'
        embedding_vector = embeddings_index.get(temp_word)
    # '1123336548956552515151515151544444444' -> 'number'
    if word.isdigit() and (embedding_vector is None):
        embedding_vector = embeddings_index.get('number')
    if embedding_vector is not None:
        embedding_matrix.append(embedding_vector)
        in_vocab += 1
    else:
        non_in_vocab += 1
        non_vocab.append(word)
        embedding_matrix.append(mean_vector)
```
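The loop above assumes that `embeddings_index` (word to vector), `word_index` (from a fitted Keras `Tokenizer`), and `mean_vector` (the fallback for out-of-vocabulary words) already exist. A minimal sketch of that setup, with hypothetical toy data in place of a real pretrained embedding file:

```python
import numpy as np

# Hypothetical toy stand-in for a pretrained embedding file: word -> vector
embeddings_index = {
    "USA": np.array([0.1, 0.2]),
    "Denis": np.array([0.3, 0.4]),
    "number": np.array([0.5, 0.6]),
}

# word_index as produced by a fitted Keras Tokenizer: word -> integer id
word_index = {"usa": 1, "denis": 2, "1123336548956552": 3}

# Fallback vector for out-of-vocabulary words: the mean of all known vectors
mean_vector = np.mean(np.stack(list(embeddings_index.values())), axis=0)
```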

## Models and Embeddings

```
from keras.layers import (Input, Embedding, SpatialDropout1D, Bidirectional,
                          CuDNNLSTM, CuDNNGRU, GlobalMaxPooling1D,
                          Concatenate, Dense)
from keras.models import Model
from keras import optimizers

def get_model(embedding_matrix, nb_words, embedding_size=607):
    inp = Input(shape=(max_length,))
    x = Embedding(nb_words, embedding_size,
                  weights=[embedding_matrix], trainable=False)(inp)
    x = SpatialDropout1D(0.3)(x)
    x1 = Bidirectional(CuDNNLSTM(256, return_sequences=True))(x)
    x2 = Bidirectional(CuDNNGRU(128, return_sequences=True))(x1)
    max_pool1 = GlobalMaxPooling1D()(x1)
    max_pool2 = GlobalMaxPooling1D()(x2)
    conc = Concatenate()([max_pool1, max_pool2])
    predictions = Dense(1, activation='sigmoid')(conc)
    model = Model(inputs=inp, outputs=predictions)
    adam = optimizers.Adam(lr=learning_rate)
    model.compile(optimizer=adam, loss='binary_crossentropy')
    return model
```

## Snapshots Ensembling

During training, the model repeatedly converges to and escapes from local minima. A snapshot of the weights is taken at each local minimum, and the snapshots are ensembled at prediction time.
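The mechanism behind this is a cyclical learning-rate schedule: the rate is annealed to near zero (the model settles into a minimum, and a snapshot is saved), then reset to its maximum (the model escapes again). A framework-free sketch, with hypothetical helper names:

```python
import math

def cyclic_lr(step, steps_per_cycle, lr_max=1e-3):
    # Cosine annealing restarted every `steps_per_cycle` steps:
    # the LR falls from lr_max toward 0, then jumps back up,
    # pushing the model out of the current local minimum.
    t = (step % steps_per_cycle) / steps_per_cycle
    return lr_max / 2 * (1 + math.cos(math.pi * t))

def snapshot_steps(total_steps, steps_per_cycle):
    # A snapshot is taken at the end of each cycle, i.e. at each LR minimum.
    return [s for s in range(1, total_steps + 1) if s % steps_per_cycle == 0]
```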

## Pseudo Labeling

We assign weights to pseudo labels (the first 1000 data points in the test set): the more confident our predictions are, the bigger the weight we assign during training.
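One simple way to turn predicted probabilities into such weights is to use the distance from the 0.5 decision boundary as the confidence. A minimal sketch (the helper name and the squaring are illustrative assumptions, not the original scheme):

```python
import numpy as np

def pseudo_label_weights(probs, power=2.0):
    # Confidence = distance from the 0.5 decision boundary, rescaled to [0, 1];
    # more confident pseudo labels get larger training weights.
    confidence = np.abs(np.asarray(probs) - 0.5) * 2.0
    return confidence ** power

probs = np.array([0.97, 0.55, 0.03])       # model predictions on test data
pseudo_labels = (probs > 0.5).astype(int)  # hard pseudo labels
weights = pseudo_label_weights(probs)      # per-sample training weights
```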

## Optimal Threshold Selection

A lot of stuff did not work during this competition (picture by Schmitz)
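Since the model outputs probabilities, a decision threshold still has to be picked, and with a metric such as F1 the usual approach is a grid search over candidate thresholds on a validation set. A minimal sketch (the helper name is hypothetical):

```python
import numpy as np

def best_f1_threshold(y_true, y_prob, grid=np.arange(0.1, 0.9, 0.01)):
    # Scan candidate thresholds and keep the one with the highest F1.
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    best_t, best_f1 = 0.5, -1.0
    for t in grid:
        y_pred = (y_prob > t).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```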

## Data Augmentation and Test Time Augmentation (TTA)

```
Non toxic: Is there an underlying message that can be read into the many multilateral agreement exits of the Trump administration during its first year and a half?
Toxic: Lol no disrespect but I think you are ducking smart?
____________________________________________________________________
New toxic: Is there an underlying message that can be read into the many multilateral agreement exits of the Trump administration during its first year and a half? Lol no disrespect but I think you are ducking smart?
```
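The augmentation above, concatenating a toxic question onto a non-toxic one and keeping the toxic label, can be sketched as follows (the helper name is hypothetical); at test time the same idea can generate augmented copies of a question whose predictions are then averaged (TTA):

```python
import random

def make_new_toxic(non_toxic_pool, toxic_pool, seed=None):
    # Prepend a random non-toxic question to a random toxic one;
    # the concatenated text keeps the toxic label.
    rng = random.Random(seed)
    return rng.choice(non_toxic_pool) + " " + rng.choice(toxic_pool)
```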