A film analysis and critique essay invites you to respond to a film and will likely be your first exposure to the film that will drive your research projects for the rest of the semester. Your task as analyst and critic is to isolate one theme (a message or lesson) from the movie and discuss how the film explores and portrays that theme. You will analyze how the creator uses the parts of the film (such as dialogue, characterization, setting, and imagery) in conjunction with rhetorical strategies (ethos, pathos, logos, and kairos) to deliver their message. Once you have completed your analysis, you will then comment on the effectiveness of the film: did it do a good job portraying the creator's message (theme)? What was good about the film? What could have been improved, removed, or added to enhance it? When you write your critique, do not use the first person; maintain the third-person voice while still offering your own evaluative judgment.
For this assignment, you must choose one of the approved films listed above to watch outside of class and to write your essay on. Talk to your classmates—if several of you want to write about the same film, try planning a ‘movie night’ to watch together.
As you watch the film, make sure you take careful notes. Viewing it once will probably not be enough for a thorough analysis and critique; you may have to stop and re-watch part or all of the film. You may also wish to remind yourself of certain parts of the film by reading about it on Wikipedia or IMDb; if you use outside sources, be sure to cite and document them correctly in your essay.
Your essay will provide a brief summary of the film (at most one paragraph), present a solid thesis statement arguing how the creator delivers their message (theme), support the thesis with evidence drawn from the film itself, offer a thoughtful critique of the film, and reflect on how the film is significant. Why is this film important to society?
The OWL at Purdue University says that when you write an analysis "you are essentially making an argument. You are arguing that your perspective—an interpretation, an evaluative judgment, or a critical evaluation—is a valid one." While writing, be mindful of the roles that the basic elements of the rhetorical situation (purpose, audience, genre, and stance) play in your essay.
A glimpse into the future, and all crime is predicted. The "precogs" in the Precrime Division use their predictive ability to arrest suspects before any harm is done. Although Philip K. Dick's story "Minority Report" may seem implausible, comparable systems exist. One of them is Bruce Bueno de Mesquita's Policon, a computer model that uses artificial intelligence algorithms to predict events and behaviors based on questions posed to a panel of experts. When one thinks of artificial intelligence, the mind immediately jumps to the idea of robots. Current misconceptions hold that these systems pose an existential threat and are capable of world domination. The idea of robots taking over the world stems from science fiction writers and has created a cloud of uncertainty surrounding the current state of artificial intelligence, commonly known by the term "AI." It is part of human nature to solve problems, especially the problem of how to create conscious yet safe artificial intelligence systems. Although experts warn that the development of artificial intelligence systems approaching the complexity of human intelligence could pose global risks and present unprecedented ethical challenges, the applications of artificial intelligence are diverse and the possibilities broad, making the quest for such intelligence worth the endeavor. The idea of artificial intelligence systems taking over the world should be left to science fiction writers, while efforts should be focused on guiding its progression through AI weaponization, ethics, and integration within the economy and job market.

Because of the historical connection between artificial intelligence and defense, an AI arms race is already under way. Rather than banning autonomy within the military, artificial intelligence researchers should cultivate a culture of safety to help manage developments in this area. The earliest weapon to operate without human input, the acoustic homing torpedo, appeared in World War II armed with enormous power: it could aim itself by listening for the characteristic sounds of its target or even follow it using sonar detection. The recognition of what such machines are capable of energized the AI movement. Nations are beginning to heavily fund artificial intelligence initiatives with the goal of creating machines that can advance military efforts. In 2017, the Pentagon requested $12 to $15 million exclusively to fund AI weapon technology (Funding of AI Research). Moreover, according to the Yonhap News Agency, a South Korean news outlet, the South Korean government also announced its plan to spend 1 trillion dollars by 2020 to boost its artificial intelligence industry. This willingness to invest in artificial intelligence weaponization shows the value global superpowers place on the technology. Nevertheless, as gun control and violence become pressing issues in America, the controversy surrounding autonomous weapons runs high. The difficulty of defining what constitutes an "autonomous weapon" will therefore block any agreement to ban these weapons.
Since a ban is unlikely to happen, sound regulatory measures must be put in place by evaluating each weapon based on its intended effects rather than on the fact that it fits into the general category of autonomous weapons. For example, if a particular weapon improved stability and civilian security, it should be welcomed. Moreover, integrating artificial intelligence into weapons is only a small portion of the potential military applications the United States is interested in, as the Pentagon wants to use AI in decision aids, planning systems, logistics, and surveillance (Geist). That autonomous weapons make up only a fifth of the AI military ecosystem shows that the majority of applications provide other benefits rather than requiring the strict regulation that weapons may demand. In fact, autonomy in the military is broadly supported by the US government. Pentagon spokesperson Roger Cabiness asserts that America is against banning autonomy and believes that "autonomy can help forces meet their legal and ethical responsibilities at the same time" (Simonite). He furthers his argument that autonomy is essential to the military by stating that "commanders can use precision-guided weapon systems with homing capabilities to reduce the risk of civilian casualties." Careful regulation of these clearly beneficial systems is the first step toward managing the AI arms race. Norms should be established among AI researchers against contributing to undesirable uses of their work that could cause harm. Establishing such norms lays the groundwork for agreements between nations, leading them to form treaties that forgo a portion of the warfighting potential of AI and focus on specific applications that improve mutual security (Geist).

Some even argue that regulation may not be necessary. Amitai and Oren Etzioni, artificial intelligence experts, examine the current state of artificial intelligence and discuss whether it should be regulated in the U.S. in their recent work, "Should Artificial Intelligence Be Regulated?". The Etzionis assert that the danger posed by AI is not imminent because the technology has not advanced far enough, and that it should continue to progress until the question of regulation becomes relevant. They also state that when regulation does become necessary, a "layered decision-making system should be implemented" (Etzioni). On the bottom level are the operational systems carrying out various tasks. Above them is a series of "oversight systems" that ensure the work is done in a specified manner. Etzioni describes the operational systems as the worker bees or staff of an office and the oversight systems as the supervisors. For example, an oversight system on driverless cars, like those used in Tesla models equipped with Autopilot, would prevent the speed limit from being exceeded. This same structure could also be applied to autonomous weapons. For instance, oversight systems would prevent AI from targeting areas prohibited by the United States, such as mosques, schools, and dams. Furthermore, having a series of oversight systems would keep weapons from relying on intelligence from a single source, increasing the overall safety of autonomous weapons.
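A minimal sketch of how such a layered arrangement might look in code appears below. The class names, rules, and thresholds are hypothetical illustrations of the operational/oversight split the Etzionis describe, not part of any real Tesla or military system.

```python
# Illustrative sketch of a layered decision-making arrangement:
# an operational system proposes actions, and a chain of oversight
# systems vetoes any action that violates a rule. All names and
# rules here are invented examples, not a real API.

from dataclasses import dataclass


@dataclass
class Action:
    kind: str      # e.g. "set_speed" or "engage_target"
    value: object  # e.g. 130 (km/h) or "school"


class SpeedLimitOversight:
    """Oversight layer for a driving system: blocks speeds over the posted limit."""
    def __init__(self, limit_kmh: float):
        self.limit_kmh = limit_kmh

    def approves(self, action: Action) -> bool:
        return not (action.kind == "set_speed" and action.value > self.limit_kmh)


class RestrictedTargetOversight:
    """Oversight layer for a weapon system: blocks prohibited target categories."""
    PROHIBITED = {"mosque", "school", "dam"}

    def approves(self, action: Action) -> bool:
        return not (action.kind == "engage_target" and action.value in self.PROHIBITED)


def execute(action: Action, oversight_layers) -> str:
    """Run the operational system's proposed action through every oversight layer."""
    for layer in oversight_layers:
        if not layer.approves(action):
            return f"blocked by {type(layer).__name__}"
    return "executed"


if __name__ == "__main__":
    layers = [SpeedLimitOversight(limit_kmh=100), RestrictedTargetOversight()]
    print(execute(Action("set_speed", 130), layers))           # blocked by SpeedLimitOversight
    print(execute(Action("engage_target", "school"), layers))  # blocked by RestrictedTargetOversight
    print(execute(Action("set_speed", 80), layers))            # executed
```

The point of the design is that the operational system never decides alone: every proposed action passes through each oversight layer, and any single layer can veto it.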
Imposing a strong framework centered on safety and regulation could remove the risk from AI military applications, help save civilian lives, and secure an upper hand in vital military combat.

As AI systems become increasingly involved in the military and even in daily life, it is important to consider the ethical concerns that artificial intelligence raises. Gray Scott, a leading expert in the field of emerging technologies, believes that if AI continues to advance at its current rate, it is only a matter of time before artificial intelligence must be treated as the equal of humans. Scott states, "The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?" Salil Shetty, Secretary General of Amnesty International, likewise agrees that there are enormous possibilities and benefits to be gained from AI if "human rights is a core design and use principle of this technology" (Stark). In their arguments, Scott and Shetty confront the misconception that artificial intelligence, once equal to human ability, will not be able to live among people. Rather, if artificial intelligence systems are treated similarly to humans, with civil rights at the center of their development, AI and humans will be able to interact well within society. This perspective aligns with "Artificial Intelligence: Potential Benefits and Considerations," written for the European Parliament, which maintains that "AI systems should function according to values that are aligned to those of humans" in order to be accepted into society and into their intended environment of operation. This is essential not only in autonomous systems but also in processes that require human and machine collaboration, since a misalignment in values could lead to ineffective cooperation. The essence of the European Parliament's work is that in order to reap the societal rewards of autonomous systems, they must follow the same "ethical principles, moral values, professional codes, and social norms" that humans would follow in the same circumstance (Rossi).

Autonomous vehicles are the first form of artificial intelligence to find its way into everyday life. Automated vehicles are legal because of the principle that "everything is permitted unless prohibited." Until recently there were no laws concerning automated vehicles, so it was perfectly legal to test self-driving cars on highways, which helped the automotive industry advance the technology enormously. Tesla's Autopilot system is one that has revolutionized the industry, allowing the driver to take their hands off the wheel as the vehicle stays within its lane, changes lanes, and dynamically adjusts its speed depending on the vehicle in front. However, with recent Tesla Autopilot-related accidents, the spotlight is no longer on the functionality of these systems but rather on their capacity for ethical decision-making. In a dangerous situation where a vehicle is using Autopilot, it must be able to make the correct and ethical choice, as explored in the MIT Moral Machine project. In this project, participants were placed in the driver's seat of an autonomous vehicle to see what they would do when confronted with a moral dilemma.
For example, questions such as "would you rather run over a pair of joggers or a pair of children?" or "would you hit a concrete wall to save a pregnant woman, or a criminal, or a baby?" were asked in order to build AI from the data and teach it the "typically moral" action (Lee). The information mama
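Below is a minimal sketch, using invented survey rows, of how crowd responses like those described above could be aggregated so a system can look up the majority ("typically moral") choice for a scenario. None of the data, scenario labels, or field names come from the actual Moral Machine dataset; they are placeholders for illustration only.

```python
# Illustrative sketch: aggregate crowd-sourced dilemma responses so that
# a system can look up the majority ("typically moral") choice for each
# scenario. The survey rows below are invented, not real Moral Machine data.

from collections import Counter, defaultdict

# Each row: (scenario description, choice one participant made)
survey_responses = [
    ("swerve into wall vs. hit two joggers", "swerve into wall"),
    ("swerve into wall vs. hit two joggers", "swerve into wall"),
    ("swerve into wall vs. hit two joggers", "hit two joggers"),
    ("hit two joggers vs. hit two children", "hit two joggers"),
    ("hit two joggers vs. hit two children", "hit two joggers"),
]


def majority_choice_table(responses):
    """Map each scenario to the choice most participants made."""
    votes = defaultdict(Counter)
    for scenario, choice in responses:
        votes[scenario][choice] += 1
    return {scenario: counts.most_common(1)[0][0] for scenario, counts in votes.items()}


if __name__ == "__main__":
    table = majority_choice_table(survey_responses)
    for scenario, choice in table.items():
        print(f"{scenario!r} -> {choice!r}")
```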