Spark groupByKey Clarification
I am trying to process some data and write the output so that the result is partitioned by one key and sorted by another parameter, say in ascending order. For example:
>>> data = sc.parallelize(range(10000))
>>> mapped = data.map(lambda x: (x % 2, x))
>>> grouped = mapped.groupByKey().partitionBy(2).map(lambda x: x[1]).saveAsTextFile("mymr-output")

$ hadoop fs -cat mymr-output/part-00000 | cut -c1-1000
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198, 200, 202, 204, 206, 208, 210, 212, 214, 216, 218, 220, 222, 224, 226, 228, 230, 232, 234, 236, 238, 240, 242, 244, 246, 248, 250, 252, 254, 256, 258, 260, 262, 264, 266, 268, 270, 272, 274, 276, 278, 280, 282, 284, 286, 288, 290, 292, 294, 296, 298, 300, 302, 304, 306, 308, 310, 312, 314, 316, 318, 320, 322, 324, 326, 328, 330, 332, 334, 336, 338, 340, 342, 344, 346, 348, 350, 352, 354, 356, 358, 360, 362, 364, 366, 368, 370, 372, 374, 376, 378, 380, 382, 384, 386, 388, 390, 392, 394, 396, 398, 400, 402, 404, 406, 408, 410, 412, 414, 416, 418, 420,

$ hadoop fs -cat mymr-output/part-00001 | cut -c1-1000
[2049, 2051, 2053, 2055, 2057, 2059, 2061, 2063, 2065, 2067, 2069, 2071, 2073, 2075, 2077, 2079, 2081, 2083, 2085, 2087, 2089, 2091, 2093, 2095, 2097, 2099, 2101, 2103, 2105, 2107, 2109, 2111, 2113, 2115, 2117, 2119, 2121, 2123, 2125, 2127, 2129, 2131, 2133, 2135, 2137, 2139, 2141, 2143, 2145, 2147, 2149, 2151, 2153, 2155, 2157, 2159, 2161, 2163, 2165, 2167, 2169, 2171, 2173, 2175, 2177, 2179, 2181, 2183, 2185, 2187, 2189, 2191, 2193, 2195, 2197, 2199, 2201, 2203, 2205, 2207, 2209, 2211, 2213, 2215, 2217, 2219, 2221, 2223, 2225, 2227, 2229, 2231, 2233, 2235, 2237, 2239, 2241, 2243, 2245, 2247, 2249, 2251, 2253, 2255, 2257, 2259, 2261, 2263, 2265, 2267, 2269, 2271, 2273, 2275, 2277, 2279, 2281, 2283, 2285, 2287, 2289, 2291, 2293, 2295, 2297, 2299, 2301, 2303, 2305, 2307, 2309, 2311, 2313, 2315, 2317, 2319, 2321, 2323, 2325, 2327, 2329, 2331, 2333, 2335, 2337, 2339, 2341, 2343, 2345, 2347, 2349, 2351, 2353, 2355, 2357, 2359, 2361, 2363, 2365, 2367, 2369, 2371, 2373, 2375, 2377, 2379, 238
This is perfect: it satisfies my first criterion, which is to have the results partitioned by key. But I also want the result sorted. I tried sorted(), but it didn't work.
>>> grouped = sorted(mapped.groupByKey().partitionBy(2).map(lambda x: x[1]))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'PipelinedRDD' object is not iterable
I don't want to use parallelize again and go recursive. Any help would be greatly appreciated.
PS: I did go through "Does groupByKey in Spark preserve the original order?" but it didn't help. Thanks, Jeevan.
Source: https://stackoverflow.com/questions/25653188