However, if the assumptions underlying that
initial learning scenario are no longer met, the results can range from useless to sinister, as Microsoft discovered with its 2016 Tay chatbot experiment, in which inflammatory inputs (courtesy of Twitter) produced a chatbot every bit as racist and sexist as it was technically accomplished.