
Running a managed application a 2nd time shows different performance than the 1st


I have a benchmarking application to test the performance of some APIs I have written. In it, I use QueryPerformanceCounter: I take QPC readings before and after calling into the API and divide their difference by the QPC frequency to get the elapsed time. But the benchmarking results seem to vary if I run the application (the same executable against the same set of DLLs) from different drives. Also, on a particular drive, running the application for the first time, closing it, and re-running it produces different benchmarking results. Can anyone explain this behavior? Am I missing something here?

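The timing arithmetic described above is simply `(counter_after - counter_before) / frequency`. A minimal, portable sketch of that pattern (in Python rather than the poster's C# harness; `time.perf_counter_ns` stands in for QueryPerformanceCounter, with its tick frequency fixed at 10^9 per second):

```python
import time

def time_api_call(api, *args):
    """Time one call the way the question describes:
    elapsed = (counter_after - counter_before) / frequency.
    perf_counter_ns plays the role of QueryPerformanceCounter here;
    its 'frequency' is a fixed 1e9 ticks per second."""
    ticks_per_second = 1_000_000_000  # resolution of perf_counter_ns
    before = time.perf_counter_ns()
    result = api(*args)
    after = time.perf_counter_ns()
    elapsed_seconds = (after - before) / ticks_per_second
    return result, elapsed_seconds
```

For example, `time_api_call(sum, range(1000))` returns the call's result alongside its wall-clock duration in seconds.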

Some more useful information:


The behavior is like this: run the application, close it, and rerun it; the benchmarking results improve on the 2nd run and thereafter remain the same. This behavior is more prominent when running from the C drive. I should also mention that my benchmark app has an option to rerun/retest a particular API without closing the app. I understand that JIT compilation is involved, but what I don't understand is this: on the first launch of the app, when you rerun an API multiple times without closing the app, the performance stabilizes after a couple of runs; yet when you close the app and rerun the same test, the performance improves further.


Also, how do you account for the performance change when run from different drives?


[INFORMATION UPDATE]

I ran NGen, and now the performance difference between runs from the same location is gone; i.e., if I open the benchmark app, run it once, close it, and rerun it from the same location, it shows the same values.


But I have encountered another problem now. When I launch the app from the D drive and run it a couple of times (a couple of iterations of the APIs within the same launch of the benchmark program), from the 3rd iteration onwards the performance of all APIs falls by around 20%. If I then close and relaunch the app, the first 2 iterations give correct values (the same values as obtained from C), and then performance falls again beyond that. This behavior is not seen when running from the C drive: no matter how many runs you take there, the results are quite consistent.


I am using large double arrays to test my API performance. I was worried that the GC would kick in between the tests, so I call GC.Collect() and GC.WaitForPendingFinalizers() explicitly before and after each test. So I don't think it has anything to do with the GC.

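The collect-before-and-after pattern described above can be sketched like this (a Python analogue of GC.Collect + GC.WaitForPendingFinalizers, not the poster's C# code; disabling automatic collection during the measured region is an extra precaution the poster does not mention):

```python
import gc
import time

def time_with_gc_isolation(api, *args):
    """Force a collection before and after the timed region so that
    garbage left over from a previous test is not charged to this one,
    and keep the collector from firing mid-measurement."""
    gc.collect()
    gc.disable()
    try:
        before = time.perf_counter()
        result = api(*args)
        elapsed = time.perf_counter() - before
    finally:
        gc.enable()
        gc.collect()
    return result, elapsed
```

The try/finally guarantees the collector is re-enabled even if the API under test throws.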

I tried using AQtime to find out what happens from the 3rd iteration onwards, but the funny thing is that when I run the application under AQtime's profiler, the performance does not fall at all.


The performance counters do not suggest any unusual I/O activity.


Thanks, Niranjan

4 Answers

#1


I think a combination of effects is at work here:


Firstly, running the same function multiple times within the test harness, with the same data each time, will likely get faster because:


  • JIT compilation will optimise the code that is run most frequently to improve performance (as mentioned already by Cory Foy)

  • The program code will be in the disk cache (as mentioned already by Crashwork)

  • Some program code will be in the CPU cache if it is small enough and executed frequently enough

If the data is different for each run of the function within the test harness, this could explain why closing and running the test harness again improves results: the data will now also be in the disk cache, where it wasn't the first time.


And finally, yes, even if two 'drives' are partitions on the same physical disk, they will have different performance: data can be read faster from the outside of the disk platter than from the inside. If they are different physical disks, then a performance difference seems quite likely. Also, one disk may be more fragmented than the other, causing longer seek times and slower data transfer rates.


#2


Running an application brings its executable and other files from the hard drive into the OS's disk cache (in RAM). If it is run again soon afterwards, many of these files are likely to still be in cache. RAM is much faster than disk.


And of course one disk may be faster than another.


#3


Yes. It's called Just-In-Time (JIT) compilation. Basically your app is deployed as MSIL (the Microsoft Intermediate Language), and the first time it is run it gets converted to native code.


You can always run NGen (see the above article), or have a warm up period in your performance testing scripts where it runs through the scenario a couple of times before actually benchmarking performance.

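The warm-up period suggested here can be sketched as follows (a Python illustration of the pattern, not the poster's harness; the `warmup` and `runs` counts are arbitrary choices):

```python
import statistics
import time

def benchmark(api, warmup=5, runs=20):
    """Run `api` a few times before measuring, so that one-time costs
    (JIT compilation in .NET, cold disk/CPU caches) are paid outside
    the measured window; then report the median of the measured runs."""
    for _ in range(warmup):
        api()
    timings = []
    for _ in range(runs):
        before = time.perf_counter()
        api()
        timings.append(time.perf_counter() - before)
    return statistics.median(timings)
```

The median is used rather than the mean so that a single slow outlier run does not skew the reported figure.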

#4


Also, other factors are probably coming into play. Filesystem caching on the machine, buffering of recently used data, etc.


Best to run several tests (or several hundred!) and average across the set, unless you're specifically measuring cold-start times.

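A minimal sketch of that aggregation step (a hypothetical Python helper, not from the original harness): given a list of per-run timings, report the cold first run separately and summarize the warmed-up remainder.

```python
import statistics

def summarize(timings):
    """Summarize repeated measurements: the first run captures the
    cold-start cost the answer mentions, while the median of the rest
    resists the odd slow run (GC pause, disk hit)."""
    return {
        "cold": timings[0],
        "median": statistics.median(timings[1:]),
        "mean": statistics.fmean(timings[1:]),
    }
```

For example, `summarize([10.0, 1.0, 2.0, 3.0])` reports a cold run of 10.0 seconds and a warmed-up median of 2.0.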

