Comparison of TCP and UDP Client-Server APIs

 

The goal of this project is to compare Java APIs for client-server development. Specifically, you will compare the Java Socket API and the Java Datagram API. These APIs will be compared from a programmer’s standpoint as well as an architectural standpoint. Thus, you will need to compare the performance obtained in each case (quantitative comparison), as well as make a qualitative comparison. When discussing the quantitative results, back your conclusions with supporting data and try to explain why one technique is faster than the other.

In addition, based on your observations, compare these APIs in relation to ease of understanding the technology, time required for development, difficulties encountered, stability of the code, and so on. This list is not meant to be exhaustive; be sure to include any pertinent conclusions based on your experiences.

Description of the Client-Server Interaction

You will implement a reliable FTP client-server system.

Provide a simple menu to the user, e.g. (a sketch of one possible menu loop follows the list):
1. GET
2. PUT
3. CD
4. QUIT
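The exact presentation is up to you; a minimal sketch of such a menu loop, with rejection of invalid input (the class name and prompt text are illustrative only), might look like:

import java.util.Scanner;

public class MenuLoop {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        while (true) {
            System.out.println("1. GET  2. PUT  3. CD  4. QUIT");
            String choice = in.nextLine().trim();
            switch (choice) {
                case "1": /* send a GET request to the server */ break;
                case "2": /* send a PUT request to the server */ break;
                case "3": /* send a CD request to the server  */ break;
                case "4": return;                     // quit cleanly
                default:  System.out.println("Invalid option, please try again.");
            }
        }
    }
}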

Your code must be robust and handle any incorrect user input. Since there can be multiple clients querying the server at the same time, with each client being serviced by a different thread on the server, the server must ensure concurrency control on the data file while it is being updated. Thus, only one thread may gain access to the data file during writes (hint: use the synchronized keyword on the write method in your code).
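For the concurrency-control hint, a minimal sketch of a synchronized write method on the server (class and method names are illustrative, not required by the assignment):

import java.io.FileOutputStream;
import java.io.IOException;

public class FileStore {

    // Shared by all client-handler threads. The synchronized keyword
    // guarantees that only one thread executes this method on a given
    // FileStore instance at a time, so concurrent PUTs cannot corrupt
    // the data file.
    public synchronized void write(String fileName, byte[] data) throws IOException {
        try (FileOutputStream out = new FileOutputStream(fileName)) {
            out.write(data);
        }
    }
}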

Be sure to terminate each thread cleanly after each client request has been serviced.
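One straightforward way to satisfy this is to let each handler's run() method return once the request has been serviced; the thread then dies on its own, with no need for Thread.stop(). A sketch, assuming the TCP variant where each client is represented by a java.net.Socket (the class name is hypothetical):

import java.io.IOException;
import java.net.Socket;

public class ClientHandler implements Runnable {
    private final Socket client;

    public ClientHandler(Socket client) {
        this.client = client;
    }

    @Override
    public void run() {
        try (Socket s = client) {
            // ... read the client's command from s and service it ...
        } catch (IOException e) {
            System.err.println("Handler error: " + e.getMessage());
        }
        // Reaching the end of run() terminates the thread cleanly.
    }
}

For the UDP variant, the same idea applies with the handler built around a received DatagramPacket instead of a Socket.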

Project 1
Implement the project as described using Java Sockets (TCP). This will be a connection-oriented client-server system with reliability built in, since TCP provides guaranteed delivery.
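A minimal sketch of the server's accept loop, reusing the hypothetical ClientHandler shown earlier (the port number 2121 is an arbitrary choice):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpFtpServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(2121)) {
            while (true) {
                Socket client = server.accept();               // blocks until a client connects
                System.out.println("accepting a new connection from "
                        + client.getRemoteSocketAddress());
                new Thread(new ClientHandler(client)).start(); // one thread per client
            }
        }
    }
}

On the client side, new Socket(host, 2121) opens the matching connection, and the file bytes for GET and PUT can be carried over the socket's input and output streams.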
Project 2
Implement the project as described using Java Datagrams (UDP). This will be a connectionless client-server system, since UDP is connectionless. However, you will need to provide reliability in your client-side application code, since UDP does not guarantee delivery.
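One simple scheme you could adopt is stop-and-wait: send a datagram, wait for an acknowledgement with a timeout, and retransmit if no ACK arrives. A minimal sketch of the client-side send path (the timeout, retry count, and names are illustrative choices only):

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class ReliableUdpSender {

    // Sends one chunk of the file and waits for a one-byte ACK,
    // retransmitting on timeout up to maxRetries times.
    public static void sendReliably(DatagramSocket socket, byte[] chunk,
                                    InetAddress host, int port) throws IOException {
        socket.setSoTimeout(500);                              // 500 ms ACK timeout (arbitrary)
        DatagramPacket out = new DatagramPacket(chunk, chunk.length, host, port);
        DatagramPacket ack = new DatagramPacket(new byte[1], 1);

        int maxRetries = 10;
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            socket.send(out);
            try {
                socket.receive(ack);                           // wait for the server's ACK
                return;                                        // acknowledged: done
            } catch (SocketTimeoutException e) {
                // packet or ACK lost: fall through and retransmit
            }
        }
        throw new IOException("chunk not acknowledged after " + maxRetries + " attempts");
    }
}

The server would reply to packet.getAddress() and packet.getPort() with a one-byte datagram as the ACK; adding a sequence number to each chunk lets both sides detect and discard duplicates.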

In each project, be sure to measure the mean response time for the server to service the client request. To graph this, have your client make requests with the following data/file sizes, in order: 1 MB, 25 MB, 50 MB, 100 MB. Do this first for the PUT command and then for the GET command. Determine the response time in each case, then plot the response time vs. offered load (file size) graph.
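Response time can be measured on the client by timestamping around each request; a small helper along these lines (the helper and its use are illustrative, not prescribed):

// Runs one transfer (e.g. () -> client.get("file_100MB.bin")) and
// returns the elapsed time in seconds. Repeat over several runs and
// average the results to obtain the mean response time per file size.
public static double timeRequest(Runnable transfer) {
    long start = System.nanoTime();
    transfer.run();
    return (System.nanoTime() - start) / 1_000_000_000.0;
}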

Next, plot the throughput versus offered load graph using the data from the GET command. Throughput in this case is bytes delivered per second (bytes/second). Plot bytes/second on the y-axis and offered load on the x-axis. So, at the end of each project, you will have 3 graphs: one for the response time of the GET command, one for the response time of the PUT command, and one for the throughput of the GET command.
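Throughput for the GET graph can be derived directly from the same measurements. For example, if a 100 MB GET completes in 4 seconds, the throughput is roughly 26.2 million bytes/second (100 × 2^20 / 4). A one-method sketch of the calculation:

// Bytes delivered per second for one GET of fileSizeBytes bytes.
public static double throughput(long fileSizeBytes, double responseTimeSec) {
    return fileSizeBytes / responseTimeSec;
}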

Have your server print out diagnostic messages about what it is doing (e.g., “accepting a new connection”, etc.). Your code will be expected to deal with invalid user commands.

 

