[Python] TensorBoard Master's capstone project [tensorboard]

chrisranderson 2017-10-9 87

I have about 120-150 hours to work on a project for school, and I was thinking about doing a visualization project in TensorBoard, and I'd like it to be usable by lots of people. Here's my idea:

Users select from one of the following to view:

  • parameter values (normalized on a per-layer basis)
  • gradient magnitudes
  • activations
  • variance over time for the above
  • maybe other stuff like highlighting receptive fields on hover, drawing on image input and seeing how activations change

These would just be pulled out of the network, reshaped, and visualized like so (and maybe a separate area for 1st conv layer filters) (video here: https://www.youtube.com/watch?v=gjXmacaxlYI):
image
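The "normalized on a per-layer basis" option above might look like the following sketch. The layer names and the plain-list representation are illustrative only, not an actual TensorBoard API:

```python
# Sketch: per-layer min-max normalization of parameter values.
# Normalizing each layer independently keeps small-magnitude layers
# visible next to large-magnitude ones in a shared greyscale image.

def normalize_per_layer(layers):
    """Map each layer's values to [0, 1] independently of other layers."""
    normalized = {}
    for name, values in layers.items():
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # avoid division by zero for constant layers
        normalized[name] = [(v - lo) / span for v in values]
    return normalized

layers = {"conv1": [-2.0, 0.0, 2.0], "fc1": [5.0, 5.5, 6.0]}
print(normalize_per_layer(layers))
# conv1 -> [0.0, 0.5, 1.0]; fc1 -> [0.0, 0.5, 1.0]
```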

I have some questions:

  1. Has this been done before in a way that is meant for lots of people to use? I couldn't find anything from a bit of Googling around, just people doing one-off things for their projects.
  2. Can this be visualized as the model is running, as fast as every iteration without drastic slowdowns? For instance, using a 1 million parameter model where the parameters are reshaped into a 1000x1000 square.
  3. Do you have any suggestions of features that you would find useful that I could add? Is there some other visualization tool that I should be building instead?
  4. Should I just build a GUI in Python or a standalone JS app instead of integrating with TensorBoard?
Latest comments (20)
dandelionmane 2017-10-9
1

Hi @chrisranderson,

This looks really cool! I'd love to support such an ambitious and interesting project if it's technically feasible.

Right now, every TensorBoard plugin gets its data via the summary system. I.e., they get data from event files that are written to disk by the summary.FileWriter. It's purely one-way communication, and it has high latency, because TensorBoard ingests data from the event logs at most every 5 seconds. Also, everything written there is by default persisted forever on disk, so if we use it for high-throughput communication it will quickly saturate disk. So, the summary system as presently written is inappropriate for any real-time streaming application.

@jart is working on revamping the summary system to use sqlite and to support data streaming, so I'll let her chime in on whether she thinks the new summary system would be a good fit for your application.

@caisq has worked on establishing direct 2-way grpc communication between TensorFlow and TensorBoard. It would be ideal if we could leverage his work, but we've had difficulty open-sourcing it due to issues with some dependencies.

I think it would also be feasible for you to develop your own system, specific to this plugin, for getting data from TensorFlow to TensorBoard, and setting up 2-way communication. Something like the following:

Let's suppose your plugin is called the RealTimeParameterVisualizer (maybe we'd come up with something catchier later). Then you create a class tensorboard.plugins.real_time_parameter.ParameterWriter. The ParameterWriter is instantiated with a pointer to the logdir. And the user modifies their training code so that every step, it offers the model weights and the gradients to the ParameterWriter.

On instantiation, the ParameterWriter makes a directory within the logdir like logdir/plugins/real_time_parameter which contains a file called mode, which will be used by TensorBoard to communicate to the ParameterWriter.

On the TensorBoard backend side, you create tensorboard.plugins.real_time_parameter.RealTimeParameterPlugin. The RealTimeParameterPlugin is given the logdir by TensorBoard framework, and it takes responsibility for writing the mode. So based on user interaction it can change the mode from "off" to "parameter values", "gradient values", "gradient variance", etc.
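The backend's side of that handoff could be sketched like this. The paths and mode strings just follow the naming in this proposal, not any existing TensorBoard code:

```python
# Sketch: the hypothetical RealTimeParameterPlugin owns the mode file and
# rewrites it whenever the user picks a different view in the frontend.
import os

def write_mode(logdir, mode):
    plugin_dir = os.path.join(logdir, "plugins", "real_time_parameter")
    os.makedirs(plugin_dir, exist_ok=True)
    # Write via a temp file + rename so the ParameterWriter's poll never
    # observes a half-written mode file.
    tmp = os.path.join(plugin_dir, "mode.tmp")
    with open(tmp, "w") as f:
        f.write(mode)
    os.replace(tmp, os.path.join(plugin_dir, "mode"))

# e.g. on user interaction:
# write_mode(logdir, "gradient values")
```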

The ParameterWriter uses poll checking to see when the mode changes from off. When the mode is not off, it begins dumping data to the filesystem containing the compressed parameter data for the frontend to visualize. We can think of an appropriate way to make sure that the amount of disk space used is bounded, e.g. by having the PW write to a new file every minute, and giving it responsibility for deleting (or downsampling) data older than 10 minutes.
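The ParameterWriter's poll-and-dump loop described above could be sketched as follows; all names are hypothetical, and the "compressed parameter data" is stood in for by opaque bytes:

```python
# Sketch: a ParameterWriter that polls the mode file, dumps frames while
# the mode is not "off", and bounds disk usage by deleting old frames.
import os
import time

class ParameterWriter:
    def __init__(self, logdir, max_age_s=600):  # keep ~10 minutes of data
        self.plugin_dir = os.path.join(logdir, "plugins", "real_time_parameter")
        os.makedirs(self.plugin_dir, exist_ok=True)
        self.max_age_s = max_age_s

    def _mode(self):
        try:
            with open(os.path.join(self.plugin_dir, "mode")) as f:
                return f.read().strip()
        except FileNotFoundError:
            return "off"  # no mode file yet means nobody is watching

    def offer(self, frame_bytes):
        """Called by the training loop every step with compressed frame data."""
        if self._mode() == "off":
            return  # skip all work when the visualization is off
        path = os.path.join(self.plugin_dir, "frame_%d" % time.time_ns())
        with open(path, "wb") as f:
            f.write(frame_bytes)
        self._prune()

    def _prune(self):
        cutoff = time.time() - self.max_age_s
        for name in os.listdir(self.plugin_dir):
            if name.startswith("frame_"):
                path = os.path.join(self.plugin_dir, name)
                if os.path.getmtime(path) < cutoff:
                    os.remove(path)
```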

In the example you gave of the million parameter model, if we want to show 16-value greyscale for each parameter at 30 FPS, that would be (10^6 values * 0.5 bytes/value * 30 per second) = 15MB/s which seems reasonable for writing/reading to disk, and processing without too much latency. Or, if we were willing to have 1 update per second, then it would be just 500KB/sec.
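The back-of-envelope figures above, spelled out (16 grey levels fit in 4 bits, i.e. half a byte per parameter):

```python
# Throughput estimate for streaming a million-parameter frame to disk.
params = 10**6
bytes_per_value = 0.5                      # 16-value greyscale = 4 bits
fps_30 = params * bytes_per_value * 30     # 15,000,000 bytes/s = 15 MB/s
fps_1 = params * bytes_per_value * 1       # 500,000 bytes/s = 500 KB/s
print(fps_30 / 1e6, "MB/s;", fps_1 / 1e3, "KB/s")  # 15.0 MB/s; 500.0 KB/s
```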

@jart / @wchargin Please share your thoughts too.

Now to answer your questions:

  1. Has this been done before in a way that is meant for lots of people to use? I couldn't find anything from a bit of Googling around, just people doing one-off things for their projects.

I'm not aware of any widely accessible version of this. I think it would be novel work, and quite valuable to the community.

  2. Can this be visualized as the model is running, as fast as every iteration without drastic slowdowns? For instance, using a 1 million parameter model where the parameters are reshaped into a 1000x1000 square.

See discussion above. I think we could accomplish something that feels fast to the user, and is near-real-time.

  3. Do you have any suggestions of features that you would find useful that I could add? Is there some other visualization tool that I should be building instead?

This seems like an interesting project to me. It could be made technically simpler by taking away the realtime component, and settling for getting data ~once per minute. But, you could focus more on building UI interaction and visualizations to really dive into the data and find ways to interpret the weights in context. E.g. serializing the weights and activations for k training examples, and looking at how the patterns of activations are different for different examples.
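The per-example idea could be prototyped roughly like this. `run_model` is a stand-in for whatever produces per-layer activations; every name here is hypothetical:

```python
# Sketch: record activations for k training examples and compare which
# units fire across examples, per layer.

def record_activations(run_model, examples):
    """Return {example_index: {layer_name: [activation, ...]}}."""
    return {i: run_model(x) for i, x in enumerate(examples)}

def active_units(acts, threshold=0.0):
    """Which units exceed the threshold for each example, per layer."""
    return {
        i: {layer: {j for j, a in enumerate(vals) if a > threshold}
            for layer, vals in layers.items()}
        for i, layers in acts.items()
    }

# Toy stand-in: "activations" are hand-written numbers for two examples.
fake = [{"fc1": [0.9, 0.0, 0.2]}, {"fc1": [0.0, 0.8, 0.0]}]
acts = record_activations(lambda x: x, fake)
print(active_units(acts))  # example 0 fires units {0, 2}; example 1 fires {1}
```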

  4. Should I just build a GUI in Python or a standalone JS app instead of integrating with TensorBoard?

You could build a GUI on your own, and it will be easier for you to develop since you'll have control over everything, and won't be limited by TensorBoard's assumptions. However, convincing people to discover and use a new tool is always an uphill battle. I guess that if this is integrated into mainline TensorBoard, the usage will be several orders of magnitude higher than if you make a purely standalone tool.

Also tagging @colah and @shancarter as they may have thoughts to offer.

jart 2017-10-9
2

One of the things on my bucket list has been to develop some type of visualization, where we encode data in real time using ffmpeg and stream it to the browser in a video tag. So I would be interested in supporting something like this.

chrisranderson 2017-10-9
3

Wow, awesome. I was a bit worried that the reply would be like "this should be on the google group instead" or some other dismissal. :) So, when you say you'd like to support the project, what does that mean? I hack on it for a few days, and when I get stuck I can ask you for help?

If I can have a hand here and there, I'd like to try doing this in TensorBoard. I have a timeline I need to stick to - I start on the project June 26th, and finish by August 14th, so I'll start on Monday. Is this a project that could get merged into the repo?

Also, for first steps, I think I'll figure out how grpc works. Closest thing I've used is ZMQ (maybe not close at all? I'm pretty ignorant here). I guess I'll figure out how to send images from a Python script to... Node or something? I've done a decent amount of JS, but I'm really cloudy on how I'll talk to TensorBoard.

Thanks for your responses!

jart 2017-10-9
4

Based on our experience, gRPC isn't quite ready yet. I also have a lot of respect for ZeroMQ, but I'm not sure if we need it. We can probably just stream protobufs over a socket using writeDelimitedTo() and a sentinel message on close (to avoid weird TCP edge cases.)
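For reference, Java protobuf's writeDelimitedTo() emits a varint length prefix followed by the message bytes. A Python equivalent of that framing, with a zero-length message as the close sentinel, might look like this (plain bytes stand in for serialized protobufs):

```python
# Sketch: length-delimited framing over a stream, writeDelimitedTo-style.
import io

def write_delimited(stream, payload):
    n = len(payload)
    while True:  # protobuf-style varint length prefix
        b = n & 0x7F
        n >>= 7
        stream.write(bytes([b | (0x80 if n else 0)]))
        if not n:
            break
    stream.write(payload)

def read_delimited(stream):
    n, shift = 0, 0
    while True:
        b = stream.read(1)[0]
        n |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:
            break
    return stream.read(n)

buf = io.BytesIO()
write_delimited(buf, b"weights-frame-1")
write_delimited(buf, b"")  # sentinel: zero-length message means "done"
buf.seek(0)
print(read_delimited(buf))  # b'weights-frame-1'
print(read_delimited(buf))  # b'' -> reader knows the stream is closing
```

In a real plugin the BytesIO would be a socket file object, but the framing is identical.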

"Support" means we can put the time aside to participate in the development process with you, by offering code reviews, answering questions, and making any framework changes you might need. This works best when there's a tight feedback cycle. For example, we like to see lots of small pull requests, rather than one big code dump.

I would recommend checking out web_library_example. It's an example of how to do TensorBoard development in a separate repository, without forking the codebase. You basically need a BUILD and WORKSPACE file to get started.

caisq 2017-10-9
5

@chrisranderson I think the project you described is very interesting and can be very useful for a lot of people. It will benefit model interpretation, understanding and debugging, which is getting more and more important as new types of DL models get invented every week. TensorFlow has TensorBoard and TFDBG, both of which have limitations. For example, TFDBG allows you to see all the intermediate tensor values during runtime. But all it currently has is a text-based interface in the shell, which is not ideal for visualizing the graph structures in TensorFlow models. TensorBoard has great graph visualization, but its connection with the TensorFlow runtime is not real-time. A visual debugger for TensorFlow in TensorBoard would be a great feature. Just imagine what you could see and do if you could "step" through nodes of a graph, visualizing each output tensor as a table, a curve, an image or a video. You could also modify the tensor value before continuing further on the graph...

TFDBG already has a protocol for real-time streaming of data from the TF runtime. But as @dandelionmane and @jart pointed out, due to some yet-unfulfilled feature requests in the gRPC library, these are not fully functional in open-source tensorflow yet. I can check with the gRPC team on their timeline to fulfill the feature request. The request mainly has to do with implementing a py_grpc_library bazel genrule. Even if their timeline is too far in the future, we can find a way to bypass the missing feature and do it the same way tensorflow/core/distributed_runtime does, i.e., implement the server in C++. The part we have to work out ourselves is SWIG-wrapping it so that it can be used in Python, as a TensorBoard plugin. The C++ libraries of the aforementioned protocol are not fully open-source yet, but I can easily make them open-source soon.

I'll think twice before implementing the protocol again in another framework, as it may cause unnecessary duplicate work and confusion to clients.

caisq 2017-10-9
6

cc @chihuahua

dandelionmane 2017-10-9
7

@chrisranderson As Justine (@jart) said, we're happy to support you by doing code reviews, answering questions, and making upstream changes if you need them. I think per Justine's suggestion, you should make a new repository for the plugin and use bazel rules to depend on it - forking web_library_example is a good starting point. We can also set up a video call so you can ask us questions, if you want.

The goal for the project will be to get your plugin to a point where we are comfortable absorbing it from you into tensorboard/plugins as an officially supported plugin. Hopefully we'll reach that by August 14.

As you can see from the back-and-forth on this thread, there are a lot of different opinions on how to do the communication between TensorFlow and TensorBoard. Personally, I would advocate for something that is simple (not too many new dependencies) and likely to work in different platforms and environments, like writing/reading to disk.

Eventually (once gRPC is ready) we will probably want to consolidate everything to use the same implementation as TFDBG. So I think my 2c would be either:

  1. write something simple and expedient (e.g. writing/reading from disk), with a reasonable interface, so we can later replace it with gRPC when that is ready
  2. work with @caisq to get something like what the debugger or distributed runtime does now (I am just scared that SWIG-wrapping etc will be a rabbit hole that distracts from actually getting the plugin to work)
caisq 2017-10-9
8

+1 what @dandelionmane said. I think it's a good idea to build a simple communication channel between TF runtime and TensorBoard that can be easily replaced with grpc once its py_grpc_library genrule is ready.

I will be happy to provide the kind of support that @dandelionmane mentioned as well. I can also keep you abreast of any potentially relevant changes in TFDBG.

caisq 2017-10-9
9

@chrisranderson forgot to mention in the previous post: the file write-read option @dandelionmane mentioned is a good candidate for the kind of simple communication channel mentioned above. TFDBG can write out tensorflow.Event protobuf files to the disk using its file:// debug URLs. This unit test is a good place to start reading about it:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/debug/lib/session_debug_testlib.py

TFDBG also has modules for reading such files and their directory structures. See also the test above, in addition to the API doc at:
https://www.tensorflow.org/api_docs/python/tfdbg/DebugDumpDir
https://www.tensorflow.org/api_docs/python/tfdbg/DebugTensorDatum

chrisranderson 2017-10-9
10

Okay, based on what I've read, my overall plan (which is very, very hazy) is:

  • fork web_library_example which will be the main repo for this project, and basically read through everything there and try to understand what's going on.
  • look through other TensorBoard plugins and TensorBoard itself to figure out how to write something that reads and writes from a file. The client side of things I have basically zero idea what I'm doing. I've got some JS and web development background, but that's about it.
  • read up on how to write my own writer to save tensors to disk. That might consist of looking through TensorBoard code to see how others do it, and just mimicking patterns I find there. I'm not super worried about accomplishing this.
  • Eventually hit the point where I can write an image to disk "server side" and display it in a browser.
  • Produce a list of concrete questions once I have better ones than "I have no idea what I'm doing please help" :)
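The "write an image to disk and display it in a browser" milestone above needs nothing more than a grayscale PNG on disk, which the stdlib alone can produce. This is a minimal sketch rather than production code:

```python
# Sketch: write reshaped parameter values as an 8-bit grayscale PNG,
# built by hand from the PNG chunk format (signature, IHDR, IDAT, IEND).
import struct
import zlib

def write_grayscale_png(path, rows):
    """rows: list of equal-length lists of 0-255 ints (e.g. reshaped params)."""
    height, width = len(rows), len(rows[0])

    def chunk(tag, data):
        body = tag + data
        return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))

    # Each scanline is prefixed with filter type 0 (no filtering).
    raw = b"".join(b"\x00" + bytes(row) for row in rows)
    # 8-bit depth, color type 0 (grayscale), default compression/filter/interlace.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n")
        f.write(chunk(b"IHDR", ihdr))
        f.write(chunk(b"IDAT", zlib.compress(raw)))
        f.write(chunk(b"IEND", b""))

write_grayscale_png("frame.png", [[0, 128, 255], [255, 128, 0]])
```

Any browser can then display the file via a plain `<img>` tag served by the plugin backend.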

Would you like to continue communication here, or should I start making issues on my forked repo? I've never really done much that seriously on GitHub. Is it preferred that I use the issues for task management? Open an issue for every thing I'm working on and every commit corresponds to an issue?

Thanks again! I'm excited and nervous to get going. I'll start Monday.

dandelionmane 2017-10-9
11

That sounds like a reasonable plan. You'll want to poke around the bazel docs to understand the web_library_example.

For communication, if you link your forked repo, I'll watch it and respond to issues that you post there. I think that may be cleaner than using this thread for everything. If you have trouble getting a hold of us, poke us here.

jart 2017-10-9
12

I can follow it too. If you post an issue in your new repository every time you have a question, then it can sort of become like a Stack Overflow for how to extend TensorBoard with Bazel. But in all fairness, they might get better search rankings if the questions are posted either here or on TensorBoard's Stack Overflow. What do you think @dandelionmane? I'm leaning towards the latter.

chrisranderson 2017-10-9
13

Here is the repo, and here is my first set of questions: chrisranderson/beholder#1.

I wouldn't mind writing up some type of guide or blog post after this is all done about writing a plugin for TensorBoard - maybe you all could take it and edit to death and post it somewhere?

dandelionmane 2017-10-9
14

That would be so great - you are gonna be the first external contributor to write a TB plugin, and a write-up on how it's done would make it a lot easier for other people to follow in your footsteps.

chrisranderson 2017-10-9
15

I presented on my project today to the CS department, and passed! :) I guess I can close this issue now.

If anyone is interested in the future of this project, you can find a discussion here: chrisranderson/beholder#33

Thank you for your help!

wchargin 2017-10-9
16

Whoa—congratulations!!

jart 2017-10-9
17

Congrats!

chihuahua 2017-10-9
18

Well deserved!

caisq 2017-10-9
19

Congrats!

luchensk 2017-10-9
20

@caisq Based on your comment about gRPC, just to be sure: is gRPC ready for setting up 2-way communication between the TF debugger and the TB debugger for now?
I noticed that some debugger code already exists in the two repos of TF and TB.
Thanks.
