Comparison of TCP and UDP Client-Server APIs
The goal of this project is to compare Java APIs for client-server development. Specifically, you will compare the Java Socket API and the Java Datagram API, from both a programmer's standpoint and an architectural standpoint. Thus, you will need to compare the performance obtained in each case (quantitative comparison) as well as make a qualitative comparison. When discussing the quantitative results, back your conclusions with supporting data, and try to explain why you feel technique A is faster than technique B.
In addition, based on your observations, you will compare these APIs with respect to ease of understanding the technology, time required for development, difficulties encountered, stability of the code, and so on. This list is not meant to be exhaustive; be sure to include any pertinent conclusions based on your experience.
Description of the Client-Server Interaction
You will implement a reliable FTP client-server system.
Provide a simple menu to the user, e.g.
1. GET
2. PUT
3. CD
4. QUIT
Your code must be robust and handle any incorrect user input. Since multiple clients can query the server at the same time, with each client serviced by a different thread on the server, the server must ensure concurrency control on the data file while it is being updated. Thus, only one thread may gain access to the data file during writes (hint: use the synchronized keyword on the write method in your code).
Be sure to terminate each thread cleanly after each client request has been serviced.
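As a sketch of the hint above: declaring the write method synchronized serializes all concurrent writes on the same object, so two client threads can never interleave their updates to the data file. The class and method names below are illustrative, not prescribed by the assignment.

```java
import java.io.FileOutputStream;
import java.io.IOException;

// Illustrative sketch: serializing writes to the shared data file.
class FileStore {
    private final String path;

    FileStore(String path) {
        this.path = path;
    }

    // Only one thread at a time may execute this method on a given
    // FileStore instance, so concurrent PUT requests cannot interleave
    // their writes to the file.
    synchronized void write(byte[] data) throws IOException {
        try (FileOutputStream out = new FileOutputStream(path, true)) {
            out.write(data);
        }
    }
}
```

Note that synchronized locks per object: all client-handler threads must share the same FileStore instance for the mutual exclusion to take effect.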
Project 1
Implement the project as described using Java Sockets (TCP). This will be a connection-oriented client-server system with reliability built in, since TCP provides guaranteed delivery.
Project 2
Implement the project as described using Java Datagrams (UDP). This will be a connectionless client-server system since UDP is connectionless. However, you will need to provide reliability in your client-side application code since UDP does not guarantee delivery.
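One common way to provide the missing reliability is a stop-and-wait scheme: send a datagram, wait for an acknowledgment, and retransmit on timeout. The sketch below illustrates this on the client side; the class name, the "ACK" reply string, and the 500 ms timeout are assumptions, not requirements of the assignment.

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of client-side reliability over UDP using
// stop-and-wait: retransmit until the server acknowledges receipt.
class ReliableUdpClient {
    private final DatagramSocket socket;
    private final InetAddress server;
    private final int port;

    ReliableUdpClient(InetAddress server, int port) throws SocketException {
        this.socket = new DatagramSocket();
        this.server = server;
        this.port = port;
        socket.setSoTimeout(500); // no ACK within 500 ms => receive() throws
    }

    // Returns true once the server acknowledges the message.
    boolean sendReliably(String msg, int maxRetries) throws Exception {
        byte[] data = msg.getBytes(StandardCharsets.UTF_8);
        DatagramPacket out = new DatagramPacket(data, data.length, server, port);
        byte[] buf = new byte[16];
        DatagramPacket ack = new DatagramPacket(buf, buf.length);
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            socket.send(out);
            try {
                socket.receive(ack); // wait for the "ACK" reply
                String reply = new String(ack.getData(), 0, ack.getLength(),
                                          StandardCharsets.UTF_8);
                if (reply.equals("ACK")) return true;
            } catch (SocketTimeoutException e) {
                // lost datagram or lost ACK: fall through and retransmit
            }
        }
        return false; // gave up after maxRetries attempts
    }
}
```

A full solution would also need sequence numbers so the server can discard duplicate datagrams caused by a lost ACK, and would apply the same scheme per chunk when transferring large files.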
In each project, be sure to measure the mean response time for the server to service the client request. To graph this, have your client make requests with the following data/file sizes, in order: 1 MB, 25 MB, 50 MB, 100 MB. Do this first for the PUT command and then for the GET command. Determine the response time in each case, then plot the response time vs. offered load (file size) graph.
Next, plot the throughput versus offered load graph using the data from the GET command. Throughput in this case is bytes delivered per second (bytes/second). Plot bytes/second on the y-axis and offered load on the x-axis. So, at the end of each project you will have 3 graphs: one graph for the response time of the GET command, one graph for the response time of the PUT command, and the throughput graph for the GET command.
Have your server print diagnostic messages about what it is doing (e.g., “accepting a new connection”). Your code will be expected to deal with invalid user commands.