FS#4287 — LCL Bruxelles

Attached to Project: Network
Task Type: Incident
Affects: the whole network
Status: CLOSED
Percent Complete: 100%
We are facing a problem on one of our devices at Brussels Interxion. This impacts traffic towards BNIX and the ChtIX customers connected to this site.

Date:  Wednesday, 16 June 2010, 13:09
Reason for closing:  Done
Additional comments about closing:  We are back to normal. The ChtIX customers on LCL are now restored.
In case of any problem, please do not hesitate to contact noc@ovh.net.
Comment by OVH - Wednesday, 16 June 2010, 10:19AM

The problem is on an upstream switch at LCL Diegem. We intend to reboot the equipment. If that fails,
we will urgently send a technician to replace it with a spare.


Comment by OVH - Wednesday, 16 June 2010, 10:54AM

The problem probably comes from the bru-1<>bru-2 link on LCL. It could also be a fault on the Force10 switch itself. We will send a technician
on site with all the necessary equipment.


Comment by OVH - Wednesday, 16 June 2010, 10:59AM

To work around the problem temporarily, we have bypassed bru-2-f10 in order to restore traffic to bru-3 (Interxion) and BNIX. Only the ChtIX customers
on LCL are still affected.
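
To illustrate, a minimal Python sketch of the kind of reachability check that confirms such a bypass restored traffic; the hostname and address below are placeholders, not OVH's real inventory:

import subprocess

# Hypothetical targets: the Interxion-side router and one BNIX peer.
TARGETS = ["bru-3.example.net", "192.0.2.1"]

def reachable(host, count=5):
    # True if at least one of `count` ICMP echoes gets a reply (2 s timeout each).
    result = subprocess.run(["ping", "-c", str(count), "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

for host in TARGETS:
    print(host, "OK" if reachable(host) else "STILL DOWN")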


Comment by OVH - Wednesday, 16 June 2010, 12:44PM

We are seeing packet loss on the 6K / F10 link.
We have upgraded the bru-1-6k router
http://travaux.ovh.com/?do=details&id=4288
but the problem remains the same.

We think that the optic in the router
is not well suited to the link between the two datacentres.
It worked with a switch between the two, but it
did not work directly.

We will be on site to replace the switch
that failed today with a spare.

The problem should be resolved around 12:30.

Meanwhile, we will cut off BNIX.
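
As a rough sketch, this is how that loss can be quantified from one end of the link in Python (the target address is a placeholder; assumes Linux iputils ping):

import re
import subprocess

def packet_loss(host, count=100):
    # Send `count` pings and parse the summary line for the loss percentage.
    out = subprocess.run(["ping", "-c", str(count), "-i", "0.2", host],
                         capture_output=True, text=True).stdout
    m = re.search(r"([\d.]+)% packet loss", out)
    return float(m.group(1)) if m else 100.0

# Placeholder for the far end of the 6K / F10 link.
print("6K/F10 link loss: %.1f%%" % packet_loss("192.0.2.2"))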


Comment by OVH - Wednesday, 16 June 2010, 12:45PM

Hi,

There's currently an issue on the BNIX. We're investigating and hope to
resolve it as soon as possible.

Best regards,


Dear Member,

Aside from the first looped port, the BNIX staff identified a second
looped port and removed this one ASAP from the config. Everything is
back to normal now.

We acknowledge that the error lies with BNIX and we wish to apologize
for the degraded service.

Kind Regards,

-- BNIX Tech Team


Comment by OVH - Wednesday, 16 June 2010, 12:45PM

Jun 16 11:15:41 20G.bru-1-6k.routers.ovh.net 147: Jun 16 10:15:23 GMT: %IPV6-4-DUPLICATE: Duplicate address FE80::213:5FFF:FE1C:6500 on TenGigabitEthernet1/2.20
Jun 16 11:05:11 20G.bru-1-6k.routers.ovh.net 88: .Jun 16 10:04:55 GMT: %IPV6-4-DUPLICATE: Duplicate address FE80::213:5FFF:FE1C:6500 on TenGigabitEthernet1/2.20
Jun 16 09:33:42 20G.bru-1-6k.routers.ovh.net 606896: Jun 16 08:33:25 GMT: %IPV6-4-DUPLICATE: Duplicate address FE80::213:5FFF:FE1C:6500 on TenGigabitEthernet1/2.20
Jun 16 09:26:29 20G.bru-1-6k.routers.ovh.net 606853: Jun 16 08:26:12 GMT: %IPV6-4-DUPLICATE: Duplicate address FE80::213:5FFF:FE1C:6500 on TenGigabitEthernet1/2.20
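
Repeated %IPV6-4-DUPLICATE reports for the same link-local address on the same interface are consistent with a layer-2 loop reflecting neighbour-discovery traffic back at the router. A minimal Python sketch that counts such repeats in a syslog feed (the regex matches the Cisco message format shown above):

import re
import sys
from collections import Counter

PATTERN = re.compile(r"%IPV6-4-DUPLICATE: Duplicate address (\S+) on (\S+)")

hits = Counter()
for line in sys.stdin:
    m = PATTERN.search(line)
    if m:
        hits[m.groups()] += 1  # key: (address, interface)

for (addr, iface), n in hits.most_common():
    print("%s on %s: %d occurrence(s)" % (addr, iface, n))

Fed the four lines above, it reports FE80::213:5FFF:FE1C:6500 on TenGigabitEthernet1/2.20 four times.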


Comment by OVH - Wednesday, 16 June 2010, 12:49PM

So it seems that the problem was due to a loop on BNIX.
Our switches were overloaded and did not respond to pings,
which is why we initially suspected a hardware failure on our side.

We have re-enabled all of the BNIX peers. There is
no other problem.


Comment by OVH - Wednesday, 16 June 2010, 12:54PM

We have switched the infrastructure on LCL back to its nominal configuration.