Concurrent programming c++? [closed]

I keep hearing about concurrent programming everywhere. Can you shed some light on what it is, and how the new C++ standards facilitate it?

Source: https://stackoverflow.com/questions/218786

Accepted answer
The reason curve_fit is giving you a constant (a flat line) is that you're passing it a dataset that is uncorrelated under the model you have defined! Let me recreate your setup first:

argon = np.genfromtxt('argon.dat')
copper = np.genfromtxt('copper.dat')
f1 = 1 - np.exp(-argon[:,1] * 1.784e-3 * 6.35)
f2 = np.exp(-copper[:,1] * 8.128e-2 * 8.96)
Now notice that f1 is based on the 2nd column of the data in the file argon.dat. It is NOT related to the first column, although of course nothing stops you from plotting a modified version of the 2nd column against the first, and that is what you did when you plotted:

import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

plt.semilogy(copper[:,0]*1000, f2, 'r-')  # <- f2 was not based on the first column of that file, but on the 2nd. Nothing stops you from plotting those together though...
plt.semilogy(argon[:,0]*1000, f1, 'b--')
plt.ylim(1e-6, 1)
plt.xlim(0, 160)

def model(x, a, b, offset):
    return a*np.exp(-b*x) + offset
Remark: in your model you had a parameter called b that was unused. Passing an unused parameter to a fitting algorithm is always a bad idea; get rid of it. Now here's the trick: you made f1 based on the 2nd column, using an exponential model. So you should pass curve_fit the 2nd column as the independent variable (labelled xdata in the function's doc-string) and then f1 as the dependent variable. Like this:

popt1, pcov = curve_fit(model, argon[:,1], f1)
popt2, pcov = curve_fit(model, copper[:,1], f2)
And that will work perfectly well.
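As a sanity check, the flat-line symptom and its fix can be reproduced with synthetic data (the arrays and parameter values below are made up for illustration, not taken from the original data files): fitting the model against its true independent variable recovers the parameters, while fitting against an unrelated column cannot do better than a roughly constant curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, offset):
    return a * np.exp(-b * x) + offset

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)                    # the "2nd column": the true independent variable
y = model(x, 2.0, 1.5, 0.3)                  # noiseless synthetic data, analogous to f1
unrelated = rng.uniform(0, 5, size=x.size)   # a column that y does NOT depend on

popt_good, _ = curve_fit(model, x, y)        # correct pairing: x vs y
popt_bad, _ = curve_fit(model, unrelated, y, maxfev=20000)  # wrong pairing

print(np.round(popt_good, 3))                # recovers the true (a, b, offset)
resid_bad = np.abs(model(unrelated, *popt_bad) - y).mean()  # large: no real fit possible
```

The wrong pairing does not crash; it just converges to something close to a flat line through the mean of y, which is exactly the symptom described above.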
Now, when you want to plot a smooth version of the product of the 2 graphs, you should start from a common interval in the independent variable. For you, this is the photon energy. The 2nd column in both data files depends on it: there is a function (one for argon, another for copper) that relates μ/ρ to the photon energy. So, if you have lots of datapoints for the energy, and you manage to get those functions, you will have many datapoints for μ/ρ. As those functions are unknown though, the best I can do is simply interpolate. However, the data is logarithmic, so logarithmic interpolation is required, not the default linear kind. So now, continue by getting lots of datapoints for the photon energy. In the dataset, the energy points increase exponentially, so you can create a decent new set of points using np.logspace:

indep_var = argon[:,0]*1000
energy = np.logspace(np.log10(indep_var.min()), np.log10(indep_var.max()), 512)  # both argon and copper list the same min and max in the "energy" column
It works to our advantage that the energy in both datasets has the same minimum and maximum. Otherwise, you would have had to reduce the range of this logspace.
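A quick illustration of why np.logspace is the right tool here (the endpoint values below are arbitrary, chosen only to mimic a keV range): consecutive points differ by a constant factor rather than a constant step, matching the exponential spacing of the tabulated energies.

```python
import numpy as np

# 8 points from 1 to 160, evenly spaced on a log axis
energy = np.logspace(np.log10(1.0), np.log10(160.0), 8)
ratios = energy[1:] / energy[:-1]

print(np.allclose(ratios, ratios[0]))  # True: constant ratio between neighbours
```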
Next, we (logarithmically) interpolate the relation energy -> μ/ρ:

# perform logarithmic interpolation
interpolated_mu_rho_argon = np.power(10, np.interp(np.log10(energy), np.log10(indep_var), np.log10(argon[:,1])))
interpolated_mu_rho_copper = np.power(10, np.interp(np.log10(energy), np.log10(copper[:,0]*1000), np.log10(copper[:,1])))
Here's a visual representation of what has just been done:
f, ax = plt.subplots(1, 2, sharex=True, sharey=True)
ax[0].semilogy(energy, interpolated_mu_rho_argon, 'gs-', lw=1)
ax[0].semilogy(indep_var, argon[:,1], 'bo--', lw=1, ms=10)
ax[1].semilogy(energy, interpolated_mu_rho_copper, 'gs-', lw=1)
ax[1].semilogy(copper[:,0]*1000, copper[:,1], 'bo--', lw=1, ms=10)
ax[0].set_title('argon')
ax[1].set_title('copper')
ax[0].set_xlabel('energy (keV)')
ax[0].set_ylabel(r'$\mu/\rho$ (cm²/g)')
The original dataset, marked with blue dots, has been finely interpolated.
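The need for log-log rather than plain linear interpolation can be seen on a toy power law (the sample points below are made up): interpolating the logarithms recovers the exact intermediate value, while linear interpolation on the raw values overshoots badly.

```python
import numpy as np

x = np.array([1.0, 10.0, 100.0])
y = x**2                       # a power law: a straight line on a log-log plot

xq = np.sqrt(10.0)             # query between the first two samples; exact answer is 10.0
y_lin = np.interp(xq, x, y)    # linear interpolation on the raw values
y_log = 10**np.interp(np.log10(xq), np.log10(x), np.log10(y))  # log-log interpolation

print(round(y_lin, 2), round(y_log, 2))  # 24.79 vs 10.0
```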
Now the last steps become easy. Because the parameters of your model that maps μ/ρ to some exponential variant (the functions that I have renamed f1 and f2) have already been found, they can be used to make a smooth version of the data that was present, as well as of the product of both these functions:

plt.figure()
plt.semilogy(energy, model(interpolated_mu_rho_argon, *popt1), 'b-', lw=1)
plt.semilogy(argon[:,0]*1000, f1, 'bo')
plt.semilogy(copper[:,0]*1000, f2, 'ro')
plt.semilogy(energy, model(interpolated_mu_rho_copper, *popt2), 'r-', lw=1)  # same remark here!
argon_copper_prod = model(interpolated_mu_rho_argon, *popt1) * model(interpolated_mu_rho_copper, *popt2)
plt.semilogy(energy, argon_copper_prod, 'g-')
plt.ylim(1e-6, 1)
plt.xlim(0, 160)
plt.xlabel('energy (keV)')
plt.ylabel(r'$\mu/\rho$ (cm²/g)')
And there you go. To summarize:

- generate a sufficient number of datapoints of the independent variable to get smooth results
- interpolate the relationship photon energy -> μ/ρ
- map your function onto the interpolated μ/ρ