
Performance Comparison of Java EE and ASP.NET Core Technologies for Web API Development

Author: Liu Jinghua

Source: Performance Comparison of Java EE and ASP.NET Core Technologies for Web API Development [J]. Applied Computer Systems, 2018.

Both Java EE (Java Platform, Enterprise Edition), developed by Oracle, and ASP.NET (Active Server Pages .NET), developed by Microsoft, offer features fit for the creation of web-based applications. However, in recent history, Java EE has had better cross-platform support – it is possible to install the HotSpot implementation of the JVM (Java Virtual Machine), which is supported by Oracle, on both Windows and GNU/Linux operating systems. When dealing with .NET, on the other hand, the full .NET Framework does not run on GNU/Linux, and Mono must be used instead – originally an open-source project, acquired by Microsoft only in 2016 [1]. Mono does not offer support for WPF (Windows Presentation Foundation) or WWF (Windows Workflow Foundation), and offers only limited support for WCF (Windows Communication Foundation) and ASP.NET [2]. However, with the release of .NET Core in 2016 and, subsequently, ASP.NET Core [3], Microsoft is supporting more operating systems. Now that a first-party CLR (Common Language Runtime) implementation is available on GNU/Linux, in addition to a modern rewrite of ASP.NET and a new web server – Kestrel [4] – it is worth reevaluating which technology stack is better suited for new projects.

The evaluation can be performed by examining their differences, i.e., how the Kestrel web server differs from IIS, which it is supposed to replace, and from the most popular Java web servers, such as Apache Tomcat [5], and how the runtime performance differs in typical use cases, running under similar, commonly utilized configurations. The present paper describes an implementation of a system which is to be used for running organic benchmarks (real-world tests) and collecting their results, offering immediate visual feedback to the user. The main goal of the benchmarking is to gain an approximation of how performant both technology stacks are on the GNU/Linux operating system and to highlight any obvious differences. General guidelines are also laid out for the software architecture and implementation practices, to ensure the capability of generating hundreds of concurrent requests and efficiently processing them, as well as handling any errors.

A common REST (Representational State Transfer) API (Application Programming Interface), which uses JSON (JavaScript Object Notation) for data transfer, is described, implemented in both technologies and deployed on identical servers. A separate application is also created, consisting of a front-end for test configuration, written in Angular 5, and a back-end service for test execution and result processing, written in Express and Node.js, which uses Redis for temporary storage and MySQL for result logging. This system is designed modularly – the servers implementing the testing APIs can be configured in the front-end interface, in addition to configuring Redis and MySQL logging. While no claims are made that the results will be objective, the system should serve as a starting point and allow for extensibility – adding more servers, which can run different languages and software or hardware configurations, with no code changes, or extending the list of benchmarks to be run, should there be a necessity for more specific tests in the future.

Java EE (Java Platform, Enterprise Edition) is a superset of Java SE (Java Platform, Standard Edition), which extends the general-purpose Java APIs with features that are useful in an enterprise setting, such as dependency injection (CDI, EJB), transaction management (JTA) and dynamic webpage functionality (JSP, JSF), as well as features for creating web services (JAX-RS, JAX-WS), at the same time reducing development time and software complexity (Fig. 1) [6]. The development is organized through the Java Community Process (JCP) and based on Java Specification Requests (JSR). The present paper describes a setup that uses Java EE 7, which was released in 2013 [7]. The Java EE 7 platform can be further divided into the Full Platform and the Web Profile, the purpose of the latter being to provide a more limited set of features that is easier to support [8]. The developed API implementation takes advantage of Servlets, JSON, CDI and JAX-RS, thus using elements of the Full Platform specification. Java EE 7 was chosen because, at the time of writing, Java EE 8 adoption was still limited: only GlassFish 5.0 and Payara 5 supported the specification, neither of which was as popular as Apache Tomcat, upon which Apache TomEE is based.
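
To make the JAX-RS usage mentioned above concrete, a minimal resource class is sketched below. It is only an illustration of the approach – the class name and path are hypothetical and not taken from the benchmarked implementation – and it assumes a JAX-RS Application is already configured by the container.

    import javax.json.Json;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // A hypothetical JAX-RS resource; the annotations register it with the
    // Java EE container (e.g., Apache TomEE) without any third-party framework.
    @Path("/status")
    public class StatusResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public String status() {
            // javax.json builders create the response dynamically,
            // without defining a static entity class.
            return Json.createObjectBuilder()
                    .add("status", "ok")
                    .add("timestamp", System.currentTimeMillis())
                    .build()
                    .toString();
        }
    }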

ASP.NET (Active Server Pages .NET) is a web application framework that is developed by Microsoft and utilizes the CLR [9]. The features provided are similar to those of Java EE; for example, Web Forms allows creating dynamic content in a similar fashion to JSP and JSF (Fig. 2) [10]. A number of components, notably ASP.NET Web API, ASP.NET AJAX and ASP.NET MVC, provide various functions [11]. ASP.NET is typically run on IIS (Internet Information Services), which is not supported on GNU/Linux by Microsoft; thus, Kestrel must be used instead.

Fig. 2. A diagram of the .NET Framework architecture, with ASP.NET displayed.

ASP.NET Core is the next generation of ASP.NET, developed by Microsoft and community contributors [12], and can be run on both the full .NET Framework and the .NET Core platform. It is a rewrite, which is meant to provide a slimmer but more up-to-date set of features, as well as a new lightweight web server, Kestrel (although using IIS is still possible on Windows). It supports multiple components – Entity Framework Core, MVC Core and Razor Core [13], amongst others – which act as alternatives to those found in ASP.NET.

To ensure optimal results, without interference caused by the configuration of the operating system used, some guidelines are defined for the design of the system:

1. The implementations for the APIs are to be placed on separate servers. In this case, VPS (virtual private servers) are used.

2. Both servers should have the same type and version of operating system. In this case, Ubuntu 16.04.4 LTS was chosen.

3. To ensure that the servers perform equally, they are to be tested by synthetic benchmarks. Both are to be subjected to the same types of benchmarks. Here, Sysbench 0.4.12 was used.

4. The servers should run the same software with the same updates, differing only in the packages that are required to run the implementations – in this case, OpenJDK and .NET Core.

5. To test how well the web servers of the specific technology stacks perform (Apache TomEE and Kestrel), no reverse proxies (such as Apache httpd or Nginx) are to be used.

Requirements were also laid out for how the testing should be conducted, to prevent memory leaks and errors caused by the implementation of the testing system itself:

1. The service that is invoking the benchmarks should not have its load capacity exceeded. This is ensured by testing each of the APIs sequentially, which prevents the service from being affected by the load, while it is still possible to run multiple iterations of a single test in parallel (a sketch of this scheduling pattern follows this list).

2. It should be possible to easily add and remove tests to be run, using a scheduler – this is implemented in the front end, thus not limiting the testing service to sequential execution, but only utilizing it in such a manner. This allows for future-proofing, should there ever be a requirement for the concurrent testing of multiple APIs.

3. The exchange of data that is not relevant among the layers of the system (the APIs, the testing service and the front end) should be minimized. Here, the contents of the testing requests are generated on the testing service itself and are exchanged only with the APIs that are to be tested. The front end only receives the results.

4. The test results should be chunked and provided to the front end upon request, before the completion of all of the scheduled test iterations; however, they should also expire after a set amount of time if not retrieved.

5. The web interface should be lightweight. This is achieved by grouping the results and showing the averages of the groups when over 100 iterations are run, to prevent the chart framework from negatively affecting the performance. The MySQL database can be used for a finer analysis of the results.
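
The testing service itself is written in Node.js (see below), but the scheduling pattern from requirement 1 – tests executed strictly one after another, with the iterations of a single test issued in parallel – is language-agnostic. The Java sketch below only illustrates the pattern, with hypothetical names; it is not the actual service code.

    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.IntStream;

    public class SequentialTestRunner {

        // Tests run sequentially so the invoking service's load capacity is
        // never exceeded; within one test, all iterations run concurrently.
        public static void runAll(List<Runnable> tests, int iterations) {
            for (Runnable test : tests) { // one test at a time
                CompletableFuture<?>[] batch = IntStream.range(0, iterations)
                        .mapToObj(i -> CompletableFuture.runAsync(test))
                        .toArray(CompletableFuture[]::new);
                CompletableFuture.allOf(batch).join(); // wait before the next test
            }
        }
    }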

The system is composed of several components, each using common technologies to re-create configurations that could be found in real-world usage.

A. Java API Implementation
The Java API is implemented using Java EE features and avoiding third-party frameworks, such as Spring, where possible. It is running on Apache TomEE Web Profile 7.0.2, which provides the functionality of the Java EE 7 Web Profile using open-source components (OpenEJB, Apache CXF, etc.) [14]. It is based on Apache Tomcat – a popular application container [15]. It is run through OpenJDK, version 1.8.0_151.

B. ASP.NET Core API Implementation
The ASP.NET Core API is implemented using ASP.NET Core features and avoiding third-party frameworks, where possible. It is running on Kestrel web server 2.0.1. The distribution is run through .NET Core, version 2.1.3.

C. Node.js + Express Testing Service
The testing service is running on Node.js v9.2.1 and is based on Express 4.15.5. It uses the cors, redis, express-redis and request-promise-native packages (installed through npm), amongst others, to provide the necessary functionality.

D. Angular 5 Web Interface
The front-end is created with Angular 5.0.0, TypeScript 2.5.3 and Bootstrap 4, which is used for providing the styling and behavior of the user interface, in combination with jQuery 3.2.1. In addition, ng2-charts is used to serve as a bridge between Angular and Chart.js, a framework that provides HTML5-based graph display capabilities [16].

The front-end serves as a façade to the rest of the system and allows configuring the Redis and MySQL instances to use, as well as the servers to be subjected to testing. It does not handle generating the test request contents, but schedules tests by invoking the testing service in the back-end. The application structure is based on a single Angular module, which has services for storing data about settings and tests, the latter of which contain both the test entries themselves – with data about the test type, iterations and other parameters – and the servers that are to be used, each test having a reference to one of the server objects. The rest of the app is composed of utility classes (such as enumerations and notification components) and components for displaying the data and organizing input and output – tab, menu and chart components. The interface is tabbed, to display only the information that is relevant to the user at any given moment – there are tabs for running tests and displaying their results, as well as for configuring the servers themselves and changing the system settings.

Figure: The web interface with the test tab open, listing the available test types.

Figure: The web interface with the server configuration tab open, listing the current servers.

Lastly, the benchmarking process attempts to divide the request run time into its components, which are displayed differently in the graphs – the time that a request spends on the network and the time that it spends being processed. The horizontal axis displays the iteration or a span of iterations (if more than 100 iterations are run), whereas the vertical axis displays the execution time, in milliseconds. This division is achieved through self-reporting by the systems and is displayed in the form of a stacked bar chart, in which the bottom bar displays the reported test execution time on the API, while the top bar displays the remainder of the time it took for the request to reach the testing service (calculated by subtracting the execution time from the total time).

Figure: A part of the web interface, displaying the results of a test and controls for changing its parameters.

Layers of the system communicate using HTTP requests, which are provided by the HttpClient class in combination with the JavaScript JSON class on the Angular side, and by Express routing with body-parser on the back-end. This format was chosen because it allows for easy information exchange between the layers, as they can use the contents like any other JavaScript object.

Fig. 6. A diagram of network requests, with Redis shown as an internal part of the back-end as a possible configuration.

Two paths are exposed by the testing service: /schedule and /results, the former of which allows scheduling a new test for execution, while the latter can be polled by the front-end to periodically receive and clear the result list stored in Redis, as well as to check whether the execution has finished. Both utilize Redis for temporarily storing data, chosen because of its performance, as it uses RAM (random access memory) for temporary key-value storage. It is accessed through the express-redis library, which mirrors the official API and allows for atomic access [17]. The only exception is setting the TTL value for lists, which is done in a separate call, to make the values expire should they not be requested within a certain amount of time. For debugging purposes, all of the communication between the layers of the system can be logged, either in the browser console or in the Node.js output, which is redirected to a file. When starting a test on the testing service, all of the data relevant to its execution must be passed in the first request: its kind, the iterations to be run, its unique identifier, information about the testing API to be used, as well as the configuration data for Redis and MySQL.

The response to a request for test results contains information about whether the test is finished, as well as an array of the results. The results include the contents and the response of each request, the specific iteration that the entry describes, and information about any errors, should they have occurred (in the form of the contents of a stack trace in the language used). Timestamps are also included for measuring execution times. The received timestamps come in pairs: the source ones are generated by the Node.js server, whereas the target ones are generated by the test API. Only their difference is important, so they do not have to match, allowing the server time configurations to differ without impacting the functionality.
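
A minimal sketch of how the target-side timestamps could be self-reported by the Java API in its JSON response is shown below; the path, field names and process method are hypothetical placeholders, not the exact implementation.

    import javax.json.Json;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/benchmark")
    public class BenchmarkResource {

        @POST
        @Produces(MediaType.APPLICATION_JSON)
        public String run(String requestBody) {
            long started = System.currentTimeMillis();   // target-side start
            String result = process(requestBody);        // the benchmarked work
            long finished = System.currentTimeMillis();  // target-side end

            // Only (finished - started) matters to the testing service, which
            // subtracts it from the total round-trip time to obtain the network
            // component; the two servers' clocks never need to be synchronised.
            return Json.createObjectBuilder()
                    .add("result", result)
                    .add("startedAt", started)
                    .add("finishedAt", finished)
                    .build()
                    .toString();
        }

        private String process(String input) {
            return input; // placeholder for the actual benchmark workload
        }
    }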

After reviewing the data, a few points of interest appear, which require further explanation.

A. ASP.NET Core Test Success Percentage
While Java was able to serve 100 % of the requests that were made to it, across all levels of parallelization and test types, this was not the case with ASP.NET Core. Although in most test types its success rate was the same, it was unable to serve all of the requests while generating the large static text responses (over 1 megabyte in size) and while generating the SHA256 hashes. This can be explained by the hardware constraints (the amount of RAM available to the servers) causing the API runtime to be terminated by the operating system after exhausting all of the available resources.

Figure: Output of htop on the server running the ASP.NET Core implementation, shortly before a crash.

While both implementations, in Java and in ASP.NET Core, used the StringBuilder classes provided by their standard libraries, it appeared that ASP.NET Core's memory usage was slightly less optimized; therefore, it could not successfully run the tests with the resources provided. This behavior was also reproducible for other test types with increased request data sizes.
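
For reference, large static text responses of the kind described above can be generated with the standard StringBuilder in a few lines. The sketch below is a simplified, hypothetical version of such a benchmark workload, not the exact code used; the chunk contents are illustrative.

    public class StaticTextGenerator {

        // Builds a response body of roughly the requested size. Both
        // implementations used their standard StringBuilder class, so the
        // differing memory behaviour is attributable to the runtimes.
        public static String generate(int targetBytes) {
            StringBuilder builder = new StringBuilder(targetBytes);
            while (builder.length() < targetBytes) {
                builder.append("benchmark-payload-");
            }
            return builder.toString();
        }

        public static void main(String[] args) {
            // Over 1 megabyte, matching the failing test type described above.
            System.out.println(generate(1_048_576).length());
        }
    }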

B. ASP.NET Core Static Text Benchmark Results
It appeared that the ASP.NET Core text generation benchmark not only had worse success rates, but also much greater test execution times, which can be attributed to the network time component of the results – ASP.NET Core took 2096 milliseconds, on average, to receive requests and send responses, compared to Java's average of 679 milliseconds, a difference of approximately a factor of 3. One possible explanation is that the Kestrel web server, which is used by ASP.NET Core, is currently less optimized than Apache Tomcat and Apache TomEE, as it is relatively new. This is evidenced not only by the fact that this particular test involved sending and receiving the largest packets, but also by the fact that the network times for the ASP.NET API implementation were higher in 13 of the 15 benchmarks.

C. Java JSON Write Benchmark Results
Another data point that could attract attention is the average request processing time for the JSON write benchmark, which was run on Java. This average of 4727 milliseconds is not only much longer than that of the ASP.NET Core implementation – 64 milliseconds – but also the longest one in any of the benchmarks run, across all of the test types and technologies used. This could be explained by the fact that, for both languages, the JSON for the test requests and responses was both generated and parsed dynamically, in contrast to the more traditional approach of generating static entities. While this sacrifices type safety to a degree, it allows for much greater flexibility and faster development, which is why this approach was chosen. This is where a difference between the languages manifests itself – C#, the language in which the ASP.NET Core implementation was written, supports the dynamic data type, while Java offers no such feature. In Java, the Json class was used, which provides support for creating object and array builders, in addition to reading JSON data. This does, however, come with the disadvantage that a builder object needs to be created for every item that must be serialised. This appears to be noticeable when working with deep, nested structures, because of which the performance degraded, even though it caused no failures. For example, code in C# can look like:

    List<dynamic> generatedObjectsArray = new List<dynamic>();
    return generatedObjectsArray.ToArray();

while in Java, the following must be done:

    JsonArrayBuilder generatedObjectsArray = Json.createArrayBuilder();
    return generatedObjectsArray.build();
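
To make the per-item builder cost concrete, the following self-contained sketch generates a deeply nested structure with the javax.json builders; the element counts and field names are illustrative only. Note that a fresh builder instance is required for every object at every level of nesting, which is the allocation overhead discussed above.

    import javax.json.Json;
    import javax.json.JsonArray;
    import javax.json.JsonArrayBuilder;
    import javax.json.JsonObjectBuilder;

    public class JsonWriteExample {

        // Builds each nested chain bottom-up: the innermost object first,
        // each level wrapped by a new JsonObjectBuilder instance.
        public static JsonArray generate(int items, int depth) {
            JsonArrayBuilder array = Json.createArrayBuilder();
            for (int i = 0; i < items; i++) {
                JsonObjectBuilder node = Json.createObjectBuilder().add("level", depth);
                for (int d = depth - 1; d >= 0; d--) {
                    node = Json.createObjectBuilder()
                            .add("level", d)
                            .add("child", node); // one builder object per level
                }
                array.add(Json.createObjectBuilder().add("id", i).add("data", node));
            }
            return array.build();
        }

        public static void main(String[] args) {
            System.out.println(generate(2, 3));
        }
    }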

After examining the results, some conclusions can be drawn, as well as suggestions for improving the testing process and the methodologies used.

A. Test API Implementations
While using the dynamic data type features to circumvent having to write resource objects for each of the different benchmarks saves time, it can lead to sub-optimal results, as in the case of the instability encountered. In the future, testing should also be done following the approach of defining static entity classes, to see whether the Java performance improves. Additionally, the performance of the technologies used could be compared on servers running Windows as well, where IIS is also available and could provide different results in relation to Kestrel. For a deeper insight, it would be useful to also log the server resource usage and to attempt testing on more powerful hardware configurations, to allow for greater parallelization and to see how multi-core processor utilization differs.

B. Runtime
While problems with ASP.NET Core memory management appeared, the overall performance of both technologies was subjectively similar and comparable – neither was noticeably slower or faster than the other across all of the test types. It would stand to reason that, because of this result, the choice of algorithms utilised to solve problems and, in turn, the code contained in the libraries which might be used in the development process could have a greater impact, as opposed to choosing either of the technologies because of otherwise insignificant differences in performance. Scientific studies performed in the past, as well as more contemporary attempts at benchmarking [22], seem to indicate that the performance of Java (and Java EE), as well as of C# (and thus ASP.NET and ASP.NET Core), depends on the particular tasks they are applied to. While ASP.NET Core was faster at processing the requests, it appeared that the Kestrel web server took longer to deliver the responses in almost all of the cases. It should also be noted that in scenarios where no blocking processes were present, such as waiting for a database to return results or reading data from a disk, a request would spend the majority of the time travelling through the network, as the averages of the processing times were noticeably smaller than those of the network times. Finally, ASP.NET Core is a new technology in active development; as such, it is not yet as mature as Java EE, and it is subject to change.
