[Python] Stretch armv8 jobs fail on apt since the official release [ros_buildfarm]

mikaelarguedas 2017-10-9

Stretch armv8 jobs have been failing consistently over the past few days. The failures seem to have appeared just after Stretch was released. I haven't looked into the issue yet, so I'm opening this to track it.

Invoking 'apt-get update'
Hit:1 http://repositories.ros.org/ubuntu/building stretch InRelease
Ign:2 http://cdn-fastly.deb.debian.org/debian stretch InRelease
Ign:3 http://cdn-fastly.deb.debian.org/debian stretch InRelease
Hit:4 http://cdn-fastly.deb.debian.org/debian stretch Release
Hit:5 http://cdn-fastly.deb.debian.org/debian stretch Release
Err:5 http://cdn-fastly.deb.debian.org/debian stretch Release
  Failed to stat - stat (2: No such file or directory)
Reading package lists...
E: The repository 'http://deb.debian.org/debian stretch Release' does no longer have a Release file.
Invocation failed without any known error condition, printing all lines to debug known error detection:
  1 'Hit:1 http://repositories.ros.org/ubuntu/building stretch InRelease'
  2 'Ign:2 http://cdn-fastly.deb.debian.org/debian stretch InRelease'
  3 'Ign:3 http://cdn-fastly.deb.debian.org/debian stretch InRelease'
  4 'Hit:4 http://cdn-fastly.deb.debian.org/debian stretch Release'
  5 'Hit:5 http://cdn-fastly.deb.debian.org/debian stretch Release'
  6 'Err:5 http://cdn-fastly.deb.debian.org/debian stretch Release'
  7 '  Failed to stat - stat (2: No such file or directory)'
  8 'Reading package lists...'
  9 'E: The repository 'http://deb.debian.org/debian stretch Release' does no longer have a Release file.'
None of the following known errors were detected:
  1 'Failed to fetch'
  2 'Hash Sum mismatch'
  3 'Unable to locate package'
  4 'is not what the server reported'

Example of failing job: http://build.ros.org/view/Lbin_dsv8_dSv8/job/Lbin_dsv8_dSv8__marti_data_structures__debian_stretch_arm64__binary/3/console
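The "known error detection" behavior visible in the log can be sketched roughly as follows. This is an illustrative Python reconstruction, not ros_buildfarm's actual code; the function names are assumptions, and only the four known-error strings come from the log above.

```python
# Illustrative sketch: scan 'apt-get update' output for known transient
# errors; the four patterns are the ones listed in the log above.
import subprocess

KNOWN_ERRORS = [
    'Failed to fetch',
    'Hash Sum mismatch',
    'Unable to locate package',
    'is not what the server reported',
]

def detect_known_error(output):
    """Return the first known error string found in the output, or None."""
    for line in output.splitlines():
        for err in KNOWN_ERRORS:
            if err in line:
                return err
    return None

def invoke_apt_update():
    """Run 'apt-get update' once and classify its failure, if any."""
    proc = subprocess.run(['apt-get', 'update'],
                          capture_output=True, text=True)
    output = proc.stdout + proc.stderr
    if proc.returncode == 0:
        return None
    err = detect_known_error(output)
    if err is None:
        # This is the branch hit in the log above: the "does no longer
        # have a Release file" message matches none of the known errors,
        # so every output line gets printed for debugging.
        print('Invocation failed without any known error condition')
    return err or 'unknown'
```

This explains why the job above printed every line: the Release-file error is simply not in the known-error list, so no retry was attempted.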

Comments (20)
tfoote 2017-10-9
1

This looks like an issue with the server/cdn.

For: https://deb.debian.org/debian I get

Not Found

The requested URL /debian was not found on this server.

Apache Server at deb.debian.org Port 443

https://deb.debian.org/debian/ redirects to https://cdn-aws.deb.debian.org/debian/ which has real content.

sloretz 2017-10-9
2

Seen a second time http://build.ros.org/job/Lbin_dsv8_dSv8__urdf_parser_plugin__debian_stretch_arm64__binary/4/console

23:25:44 Invoking 'apt-get update'
23:25:46 Hit:1 http://repositories.ros.org/ubuntu/building stretch InRelease
23:25:47 Ign:2 http://cdn-fastly.deb.debian.org/debian stretch InRelease
23:25:47 Ign:3 http://cdn-fastly.deb.debian.org/debian stretch InRelease
23:25:47 Hit:4 http://cdn-fastly.deb.debian.org/debian stretch Release
23:25:47 Hit:5 http://cdn-fastly.deb.debian.org/debian stretch Release
23:25:47 Err:5 http://cdn-fastly.deb.debian.org/debian stretch Release
23:25:47   Failed to stat - stat (2: No such file or directory)
23:26:02 Reading package lists...
23:26:02 E: The repository 'http://httpredir.debian.org/debian stretch Release' does no longer have a Release file.
23:26:02 Invocation failed without any known error condition, printing all lines to debug known error detection:
23:26:02   1 'Hit:1 http://repositories.ros.org/ubuntu/building stretch InRelease'
23:26:02   2 'Ign:2 http://cdn-fastly.deb.debian.org/debian stretch InRelease'
23:26:02   3 'Ign:3 http://cdn-fastly.deb.debian.org/debian stretch InRelease'
23:26:02   4 'Hit:4 http://cdn-fastly.deb.debian.org/debian stretch Release'
23:26:02   5 'Hit:5 http://cdn-fastly.deb.debian.org/debian stretch Release'
23:26:02   6 'Err:5 http://cdn-fastly.deb.debian.org/debian stretch Release'
23:26:02   7 '  Failed to stat - stat (2: No such file or directory)'
23:26:02   8 'Reading package lists...'
23:26:02   9 'E: The repository 'http://httpredir.debian.org/debian stretch Release' does no longer have a Release file.'
23:26:02 None of the following known errors were detected:
23:26:02   1 'Failed to fetch'
23:26:02   2 'Hash Sum mismatch'
23:26:02   3 'Unable to locate package'
23:26:02   4 'is not what the server reported'
dirkthomas 2017-10-9
3

Likely addressed by #458.

mikaelarguedas 2017-10-9
4

Waiting for a rebuild of all previously failing jobs to see if #458 fixed the issue or whether any other action needs to be taken.

mikaelarguedas 2017-10-9
5

While we have seen far fewer failures since #458, it looks like apt update still fails fairly often when hitting the contrib and non-free arm64 repositories. It doesn't look related to the machine running the job, given that the following three jobs ran on different machines.
http://build.ros.org/view/Lbin_dsv8_dSv8/job/Lbin_dsv8_dSv8__vision_visp__debian_stretch_arm64__binary/6/console
http://build.ros.org/view/Lbin_dsv8_dSv8/job/Lbin_dsv8_dSv8__phidgets_imu__debian_stretch_arm64__binary/7/console
http://build.ros.org/view/Lbin_dsv8_dSv8/job/Lbin_dsv8_dSv8__combined_robot_hw_tests__debian_stretch_arm64__binary/10/console

tfoote 2017-10-9
6

So Fastly seems to be the default, but there's also a CloudFront mirror, https://cloudfront.debian.net/, which I expect will be faster on EC2, being an Amazon product.

tfoote 2017-10-9
7

I haven't had a chance to watch it fully, but this seems like a good resource for understanding the new CDN approach: https://debconf16.debconf.org/talks/97/

mikaelarguedas 2017-10-9
8

I gave the CloudFront mirror a try on the jobs that failed last night, using this branch, and it seems that both jobs passed (1 and 2). It's definitely not enough data points to draw any conclusion, but I'll keep rerunning failing jobs with that branch to see if it consistently passes.

Talking with @tfoote yesterday, we were also wondering whether clearing the lists between retries would help (apt-get clean && rm -rf /var/lib/apt/lists/). I think it's always good to do, to be sure we have the latest data. But thinking about it a bit more, I'm not sure this would solve the issue of getting a corrupted file: apt must already be downloading and overwriting the lists, otherwise the retry on Hash Sum mismatch would never have a different outcome on any job. So my guess is that some Debian mirrors have wonky arm files and we repeatedly download the same bad files.

I haven't had a chance to watch it fully, but this seems like a good resource to understand the new cdn approach. https://debconf16.debconf.org/talks/97/

Nice! I'll try to have a look in the near future.
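The clean-and-retry idea above can be sketched like this. The wrapper, parameterization, and retry counts are illustrative assumptions; only the `apt-get clean` and lists-removal commands come from the discussion.

```python
# Illustrative sketch: retry 'apt-get update', clearing the apt caches
# between attempts so a retry cannot reuse a corrupted local download.
import shutil
import subprocess
import time

def apt_update_with_clean_retries(update_cmd=('apt-get', 'update'),
                                  clean_cmd=('apt-get', 'clean'),
                                  lists_dir='/var/lib/apt/lists/',
                                  max_tries=3, sleep_seconds=9):
    for attempt in range(1, max_tries + 1):
        if subprocess.run(list(update_cmd)).returncode == 0:
            return True
        # Drop cached archives and list files (the commands from the
        # comment above) so the next attempt starts from a clean slate.
        subprocess.run(list(clean_cmd))
        shutil.rmtree(lists_dir, ignore_errors=True)
        if attempt < max_tries:
            time.sleep(sleep_seconds)
    return False
```

As the comment notes, this only helps if the corruption lives on the client side; if the proxy or mirror keeps serving the same bad file, clearing local state changes nothing.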

mikaelarguedas 2017-10-9
9

Actually, CloudFront looks promising: usually, failing jobs retriggered on the same machine would keep failing, but with that change a job that was previously failing just passed on the same machine.

dirkthomas 2017-10-9
10

If the retries fail for the same reason, could this also be due to the proxy we are using, which might not re-fetch the resource but return the same broken result over and over?

dirkthomas 2017-10-9
11

This output is actually interesting. It even suggests that the proxy might be involved here. The compressed output looks like this:

  • First invocation of apt update: fails with "Hash Sum mismatch"
  • Reinvoke 'apt update' (2/10) after sleeping 9 seconds: fails with "Hash Sum mismatch"
  • Reinvoke 'apt update' (3/10) after sleeping 11 seconds: fails with "Hash Sum mismatch"
  • Reinvoke 'apt update' (4/10) after sleeping 13 seconds: fails with "Hash Sum mismatch"
  • Reinvoke 'apt update' (5/10) after sleeping 15 seconds: fails with "Hash Sum mismatch"
  • Reinvoke 'apt update' (6/10) after sleeping 17 seconds: succeeds

The interesting point is that the successful run happens 65 seconds after the initial failure. So it looks like the first four retries might all have been served from the proxy, which returns the same file with the hash sum mismatch as the first invocation. If that is the case, the proxy behavior undermines the retry logic: essentially we are not retrying 10 times but only 3 times. If so, we should either make the proxy drop that resource from the cache or, if that is not possible, extend the timeout for each retry to be bigger than the refresh threshold of the proxy.
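The arithmetic behind that estimate can be checked quickly. The sleep schedule (9 s, then +2 s per retry) comes from the log summary above; the 60-second cache window is an assumption for illustration.

```python
# Quick model of the analysis above: with linearly growing sleeps, how many
# attempts in a retry schedule actually reach past the proxy's cache window?

def retry_offsets(sleeps):
    """Cumulative time (seconds) of each reinvocation after the first try."""
    offsets, elapsed = [], 0
    for s in sleeps:
        elapsed += s
        offsets.append(elapsed)
    return offsets

def distinct_fetches(sleeps, cache_ttl):
    """Count attempts that could see a freshly fetched file, assuming the
    proxy serves the same cached copy for cache_ttl seconds."""
    distinct = 1      # the initial invocation always fetches from upstream
    last_fetch = 0
    for offset in retry_offsets(sleeps):
        if offset - last_fetch >= cache_ttl:
            distinct += 1
            last_fetch = offset
    return distinct
```

With the full 10-try schedule (sleeps of 9, 11, ..., 25 seconds) and a 60-second cache window, only three attempts would ever see a fresh file, matching the "only 3 times" estimate above.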

tfoote 2017-10-9
12

Totaling the sleep time isn't a good indicator of elapsed time, which is what the cache timeout cares about: the failing updates themselves take as long as the sleeps. The first error happens at 4:19 elapsed in the job, and the successful run starts at 6:35, notably more than the sleep timeout. And as I mentioned, we get different results on the file size: 72430 for the first minute or so, then 92209 for the next minute. Then it passes. So it's likely that we are caching the content for 1 minute, but in this case we're still getting consecutive failures even after the proxy refreshes.

This suggests that we might want to consider slowing down our retry cycle. Unfortunately squid only takes the timeout in minutes, so I don't know how to set it lower. We could also try to figure out how to set cache information on the repository webserver, but I don't believe that's something reprepro supports, and it would only work for the repos we host. Either way, it's optimizing around the side effects of the bad return values from the apt repo.
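For reference, squid's cache lifetimes come from `refresh_pattern` rules, whose min/max fields are indeed expressed in minutes. A hedged sketch of forcing apt index files to always revalidate might look like this; the regex and values are assumptions, not the buildfarm's actual configuration:

```
# squid.conf sketch (illustrative only): always revalidate apt index files.
# min and max are in minutes, hence the limitation mentioned above.
refresh_pattern -i /dists/.*(Packages|Sources|Release|InRelease)[^/]*$ 0 0% 0
refresh_pattern .  0 20% 4320
```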

Also, while looking into this, I checked: the file whose size apt is expecting is the Packages file inside the gzip archive hosted here: http://cdn-fastly.deb.debian.org/debian/dists/stretch/non-free/binary-arm64/Packages.gz

$ ls -l Packages 
-rw-rw-r-- 1 tfoote tfoote 222075 Jul  7 16:23 Packages
tfoote@snowman:/tmp Last: [0] (0s Seconds)
$ md5sum Packages 
b650dc75f778a94560b3e6bbc1bd81a2  Packages
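The same check can be reproduced in Python with hashlib; the expected digest in the comment is the one from the shell session above.

```python
# Hash a downloaded Packages file and compare it against the checksum apt
# expected, mirroring the md5sum invocation above.
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the hex md5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

# md5_of_file('Packages') == 'b650dc75f778a94560b3e6bbc1bd81a2'
```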


And that file has been there since June 17th, Stretch's release date, which is approximately when we started having trouble.

We just have to find the CDN node that's having trouble....

mikaelarguedas 2017-10-9
13

When talking with @dirk-thomas offline, we figured it could be a good idea (not necessarily only for this particular issue) to segregate our "known errors" into "will fail the retry if the proxy still has the resource" vs. "the proxy doesn't impact the retries". That way we could use a different sleep time between retries for the first category (adding the squid timeout to the wait time?) and thus ensure that the retries actually perform a different action.

I agree this is still partly a workaround for this specific issue, but it would have the benefit of ensuring that the retry logic is actually retrying, not fetching the same corrupted resource half of the time.
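A minimal sketch of that segregation might look like the following. The category assignments, the proxy_ttl value, and the backoff parameters are illustrative assumptions, not the buildfarm's implementation.

```python
# Illustrative sketch: split known errors into proxy-sensitive and plain
# transient errors, and pad the sleep for the proxy-sensitive ones.
PROXY_SENSITIVE_ERRORS = {
    # Retrying quickly may just re-fetch the same cached, corrupted copy.
    'Hash Sum mismatch',
    'is not what the server reported',
}
TRANSIENT_ERRORS = {
    # The proxy should not affect these; a short sleep is enough.
    'Failed to fetch',
    'Unable to locate package',
}

def sleep_before_retry(error, attempt, base=9, step=2, proxy_ttl=60):
    """Linear backoff, plus the proxy cache lifetime for errors where an
    immediate retry would just hit the same cached resource."""
    delay = base + step * (attempt - 1)
    if error in PROXY_SENSITIVE_ERRORS:
        delay += proxy_ttl
    return delay
```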

dirkthomas 2017-10-9
14

In order to keep the usage of the proxy transparent it would be good to always check the URL upstream. Basically a threshold of zero minutes if that is possible.

Also, has anyone made an effort to report this problem? That would probably be the best approach to get a broken mirror fixed, if that is actually the cause.

tfoote 2017-10-9
15

I can't find a point of contact for the fastly CDN so I've asked Jeb who's listed as a point of contact for the cloudfront mirror if he can make a connection or provide any insight.

tfoote 2017-10-9
16

I got some feedback from James:

Sorry, I don't know anyone who is looking after Fastly. I have been seeing an increase in consistency errors on the mirror network that is the upstream of the CloudFront (AWS) CDN; some mirror sites (i.e., upstream origin servers) were sending redirects to the CDN instead of serving content. I've just moved cloudfront.debian.net from using ftp.debian.org as its upstream to ftp.us.debian.org to try to reduce the number of nodes that sit upstream (both are pools of servers, but the US pool is obviously smaller than the global pool).

Oftentimes the cached directory listing (the auto-index-generated web page) does not accurately reflect the information of the files (objects) that sit under that URL path, so looking at /debian/dists/stretch/non-free/binary-arm64/ may not show the actual Packages.gz or .xz that you're being served back - it may be an out-of-date index page, or the objects at Packages.gz and Packages.xz may be out of date. These files need to be kept very fresh, and I don't know what the Fastly admins are doing to ensure that.

On the CloudFront CDN, I force the Time to Live (TTL) for paths in /debian/dists/ to be very short - sometimes as short as 5 minutes, whereas the default would be 24 hours. If I didn't set it to 5 minutes for that path, then the request to get the directory listing may be done live against an origin in the case of a CDN cache MISS, but the next request to get Packages.xz may be a cache HIT on a file that was uploaded 23 hours ago, even though that's not the one you're after. There's a bunch of signed/checksummed files, and the one that's 23 hours old may be out of date.

Hope that helps with some ideas. Use curl or wget, and look at the headers of each file you would request, and check the timestamps and cache hit/miss status (should be in the headers of the responses).
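James's curl/wget suggestion can also be scripted. Below is a sketch using only the standard library; note that header names such as X-Cache or X-Served-By vary by CDN and are not guaranteed to be present.

```python
# HEAD-request a URL and keep only the headers that reveal freshness and
# CDN cache hit/miss status, per the advice above.
import urllib.request

INTERESTING = ('Date', 'Last-Modified', 'Age', 'X-Cache', 'X-Served-By', 'Via')

def pick_headers(headers, interesting=INTERESTING):
    """Filter a header mapping down to the cache-related entries."""
    return {name: headers.get(name)
            for name in interesting if headers.get(name)}

def cache_headers(url):
    """HEAD-request a URL and return its cache-related headers."""
    req = urllib.request.Request(url, method='HEAD')
    with urllib.request.urlopen(req, timeout=10) as resp:
        return pick_headers(resp.headers)

# Example (requires network access):
# cache_headers('http://cdn-fastly.deb.debian.org/debian/dists/stretch/'
#               'non-free/binary-arm64/Packages.gz')
```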

Based on that, I'd suggest that we go ahead with #461 since James is monitoring these issues.

mikaelarguedas 2017-10-9
17

I'm going to close this now that #461 has merged.

Thanks @tfoote and @dirk-thomas for the follow-up on this. I'll monitor the jobs and reopen this if this issue still happens.

mikaelarguedas 2017-10-9
18

Follow-up: after a full rebuild of Lunar we didn't face this issue a single time, so I'd say it's safe to assume that switching to CloudFront solved it.

mikaelarguedas 2017-10-9
19

Actually, we have had several failures due to a corrupted gnome-icon-theme deb on some Fastly mirrors. Note: this package comes from the main repository, so it is not using the CloudFront mirrors introduced in #461.

Link to the currently impacted jobs:
http://build.ros.org/job/Lbin_dsv8_dSv8__image_cb_detector__debian_stretch_arm64__binary/6/
http://build.ros.org/job/Lbin_dsv8_dSv8__swri_geometry_util__debian_stretch_arm64__binary/4/
http://build.ros.org/view/Lbin_dsv8_dSv8/job/Lbin_dsv8_dSv8__moveit_ros_planning_interface__debian_stretch_arm64__binary/

mikaelarguedas 2017-10-9
20

Haven't seen any failure since #467 and #468 got merged. Going to close this again and keep monitoring the jobs.
