Welcome to What the Tech?!, Refinery29's weekly column explaining the basics behind a buzzword or concept you've heard tossed around in conversation (but maybe don't actually understand).

At this rate, artificial intelligence might just be the buzziest buzzword of 2016. It drums up pop culture images of brainiac computers and deadly terminators bent on humanity's destruction. But at its most essential, AI is decidedly less Hollywood. When you hear the phrase used today, AI usually refers to a machine that, thanks to its coding, can learn and evolve based on its experiences. Hopefully, that evolution ends up making the machine more accurate and more useful...but that's not always the case.

There are varying degrees of intelligence among AIs, in the same way there are different types of intelligence among living things. A digital assistant, such as Cortana or Google Now, learns from your interactions and behaviors, so it presents you with information (weather, traffic, and news, for example) that seems most relevant based on your location, commuting habits, and past clicks. Google's AlphaGo program, trained on games played by expert players of Go, a chess-like Asian board game, uses that experience to play the game successfully. Other AIs are simpler, recognizing and enhancing a portion of an image (and then doing it over and over again until it becomes a veritable work of art). Still others use natural language processing to improve their understanding of human speech and to hold conversations.

An AI improves through what's called "machine learning," which is now used in a ton of different industries and applications. It's especially good at sorting through large sets of data and identifying patterns or anomalies, a task that would take a human ages. Machine learning is different from automation, however. With automation, a machine does the same task over and over again; with machine learning, repetition is involved, but so is decision making.
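For readers curious to peek under the hood, here's a minimal Python sketch (not from the column, and far simpler than anything powering AlphaGo or Cortana) of the distinction above: an "automated" function applies one hard-coded rule forever, while a "learning" function infers its rule from example data. The temperature-conversion scenario and all function names are illustrative assumptions.

```python
# Automation: the machine applies the same fixed, hand-coded rule every time.
def automated_fahrenheit(celsius):
    """Hard-coded Celsius-to-Fahrenheit formula; it never changes."""
    return celsius * 9 / 5 + 32

# A toy stand-in for machine learning: fit a line y = w*x + b to example
# data with least squares, so the rule is inferred rather than hand-coded.
def learn_line(xs, ys):
    """Return a function predicting y from x, fitted to (x, y) examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    b = mean_y - w * mean_x
    return lambda x: w * x + b

# "Train" on a handful of Celsius/Fahrenheit examples the machine has seen.
examples_c = [0, 10, 20, 30, 40]
examples_f = [automated_fahrenheit(c) for c in examples_c]
learned = learn_line(examples_c, examples_f)

print(automated_fahrenheit(25))   # the fixed rule
print(round(learned(25), 6))      # the rule recovered from data
```

The point of the toy: the second function was never told the 9/5-and-32 formula; it worked the rule out from examples, which is the essence of what "learning from experience" means here.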
Could an AI take over the world? At this point, that's very unlikely, although it is possible that an AI gone awry could seriously mess up a digital system it controlled. For now, it's human error, not robotic malice, that we need to worry about.