
QDialog missing border


When I try to display a QDialog, it appears in the top left corner of my screen with no border. The content is rendered correctly, but the border is missing.

Even if I set all kinds of WindowHints and WindowTypes, like Qt::Widget, Qt::Dialog, or Qt::WindowTitleHint, nothing changes!

Thanks for any hints!

EDIT: I tried the same software on Windows and it works (maybe an ifdef makes the difference ...)


OS: Ubuntu 16.04

Qt: 5.6.1

MyDialog.cpp

MyDialog::MyDialog(MyDialog::MyDialogTypes type, QWidget *parent) :
    QDialog(parent) ,
    ui(new Ui::MyDialog)
{
    ui->setupUi(this);
    setDialogType(type);
}

MainWindow.cpp

bool MainWindow::confirm() 
{
    MyDialog dlg(MyDialog::Type1, this);
    dlg.setWindowTitle("ABC");

    return dlg.exec() != QDialog::Accepted;
}

Source: https://stackoverflow.com/questions/40147262
Updated: 2022-05-16 22:05

Best answer



In this kind of situation you should profile your code (to see which function calls are taking the most time); that way you can check empirically that it is indeed read_csv that is slow, rather than something elsewhere...

From looking at your code: firstly, there's a lot of copying here and a lot of looping (not enough vectorization)... every time you see a loop, look for a way to remove it. Secondly, when you use things like zfill, I wonder if you want to_fwf (fixed-width format) rather than to_csv?
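For example, the zfill step can be done with a single vectorized string operation rather than a Python-level loop; a minimal sketch (the DataFrame contents here are made up for illustration):

import pandas as pd

# hypothetical stand-in for the question's data
df = pd.DataFrame({'FileID': [1, 2], 'ID': [7, 42]})

# build the zero-padded string column in one vectorized call,
# instead of looping over rows or using apply
concat_index = 100000 * df['FileID'] + df['ID']
df['Concatenated String Index'] = concat_index.astype(str).str.zfill(10)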

Some sanity testing: are some files significantly bigger than others (which could lead to you hitting swap)? Are you sure the largest files are only 1200 rows? Have you checked this? e.g. using wc -l.

IMO I think it is unlikely to be garbage collection... (as was suggested in the other answer).


Here are a few improvements to your code which should reduce the runtime.

Since the columns are fixed, I would extract the column calculations and vectorize the real, child, and other normalizations. Use apply rather than iterating (for zfill).

# Note: head, exclude, pickleFiles, path, t1 and finalnormCSVFile come from
# the surrounding code in the question.
from datetime import datetime
import pandas as pd

columns_to_drop = set(head) & set(exclude)  # maybe also - ['ConcatIndex']
remaining_cols = set(head) - set(exclude)
real_cols = [r for r in remaining_cols if 'Real ' in r]
# str.strip removes characters, not a substring, so use replace to get the suffix
real_cols_suffix = [r.replace('Real ', '') for r in real_cols]
remaining_cols = remaining_cols - set(real_cols)
child_cols = [r for r in remaining_cols if 'child' in r]
child_cols_desc = [r.replace('child', '').replace('desc', '') for r in child_cols]
remaining_cols = remaining_cols - set(child_cols)

for count, picklefile in enumerate(pickleFiles):
    if count % 100 == 0:
        t2 = datetime.now()
        print(str(t2))
        print('count = ' + str(count))
        print('time: ' + str(t2 - t1) + '\n')
        t1 = t2

    # DataFrame manipulation:
    df = pd.read_pickle(path + picklefile)

    df['ConcatIndex'] = 100000 * df.FileID + df.ID
    # use apply here rather than iterating
    df['Concatenated String Index'] = df['ConcatIndex'].apply(lambda x: str(x).zfill(10))
    df.index = df.ConcatIndex

    # DataFrame normalization:
    dftemp = df.copy()  # in place of the question's very_deep_copy; don't *think* you need this

    # drop all excludes
    dftemp.drop(list(columns_to_drop), axis=1, inplace=True)

    # normalize real cols
    m = dftemp[real_cols_suffix].max()
    m.index = real_cols
    dftemp[real_cols] = dftemp[real_cols] / m

    # normalize child cols
    m = dftemp[child_cols_desc].max()
    m.index = child_cols
    dftemp[child_cols] = dftemp[child_cols] / m

    # normalize remaining (real and child cols were already subtracted above)
    remaining = list(remaining_cols)
    dftemp[remaining] = dftemp[remaining] / dftemp[remaining].max()

    # if this case is important then discard the rows of m where .max() is 0
    # if max != 0:
    #     dftemp[string] = dftemp[string] / max

    # 'ConcatIndex' is dropped earlier; if you need it, subtract ['ConcatIndex'] from columns_to_drop
    # dftemp.drop('ConcatIndex', axis=1, inplace=True)

    # saving the DataFrame to CSV:
    if picklefile == '0000.p':
        dftemp.to_csv(finalnormCSVFile)
    else:
        dftemp.to_csv(finalnormCSVFile, mode='a', header=False)

As a point of style, I would probably choose to wrap each of these parts into functions; this will also mean more things can be gc'd, if that really was the issue...
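A rough sketch of that refactoring (the helper names are mine, not from the original code):

import pandas as pd

def load_frame(path):
    # isolating I/O in a function means the frame can be gc'd as soon
    # as the caller lets go of it
    return pd.read_pickle(path)

def normalize(df, cols):
    # divide the given columns by their per-column maximum
    out = df.copy()
    out[cols] = out[cols] / out[cols].max()
    return out

def process(picklefile, src_dir, csv_path, cols, first):
    df = normalize(load_frame(src_dir + picklefile), cols)
    df.to_csv(csv_path, mode='w' if first else 'a', header=first)

Every intermediate frame is now local to a function call, so nothing accumulates across iterations of the outer loop.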


Another option, which would be faster, is to use PyTables (HDFStore) if you don't need the resulting output to be csv (but I expect you do)...
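A minimal sketch of the HDF5 route, assuming the PyTables package is installed (the file and key names here are made up):

import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0]})

# append each processed frame to a table-format HDF5 store;
# format='table' is what allows appending
df.to_hdf('normalized.h5', key='data', mode='a', format='table', append=True)

# read it back later in one go (or in chunks via HDFStore.select with chunksize)
result = pd.read_hdf('normalized.h5', 'data')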

By far the best thing to do is to profile your code, e.g. with %prun in IPython (see http://pynash.org/2013/03/06/timing-and-profiling.html). Then you can see whether it definitely is read_csv, and specifically where (which line of your code and which lines of pandas code).
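Outside of IPython, the standard-library profiler gives the same breakdown; a minimal sketch (process_all is a hypothetical stand-in for the conversion loop):

import cProfile
import pstats

def process_all():
    # stand-in for the pickle-to-csv loop being investigated
    sum(i * i for i in range(100000))

# profile the call and dump the stats to a file
cProfile.run('process_all()', 'convert.prof')

# sort by cumulative time to see which calls dominate (read_csv? close?)
pstats.Stats('convert.prof').sort_stats('cumulative').print_stats(10)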


Ah ha, I'd missed that you are appending all of these to a single csv file. And your prun output shows most of the time is spent in close, so let's keep the file open:

# outside of the for loop (so the file is opened and closed only once)
f = open(finalnormCSVFile, 'w')

...
for picklefile in ...

    if picklefile == '0000.p':
        dftemp.to_csv(f)
    else:
        dftemp.to_csv(f, mode='a', header=False)
...

f.close()

Each time the file is opened, before it can be appended to it needs to seek to the end before writing; it could be that this is what's expensive (I don't see why it should be that bad, but keeping the file open removes the need to do this).
